Test Report: QEMU_macOS 19166

98210e04775e460720dbaecad9184210c804dd29:2024-07-01:35133

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.78
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.89
27 TestAddons/Setup 10.36
28 TestCertOptions 9.93
29 TestCertExpiration 195.25
30 TestDockerFlags 10.21
31 TestForceSystemdFlag 10.04
32 TestForceSystemdEnv 12.17
38 TestErrorSpam/setup 9.8
47 TestFunctional/serial/StartWithProxy 9.96
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.64
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.96
63 TestFunctional/serial/ExtraConfig 5.25
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.27
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.28
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.04
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.09
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.33
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 94.67
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.47
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 36.97
141 TestMultiControlPlane/serial/StartCluster 9.8
142 TestMultiControlPlane/serial/DeployApp 113.68
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 58.47
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.62
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 2.22
156 TestMultiControlPlane/serial/RestartCluster 5.25
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 9.88
165 TestJSONOutput/start/Command 9.84
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.04
194 TestMinikubeProfile 10.01
197 TestMountStart/serial/StartWithMountFirst 10.01
200 TestMultiNode/serial/FreshStart2Nodes 10
201 TestMultiNode/serial/DeployApp2Nodes 116.84
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 48.07
209 TestMultiNode/serial/RestartKeepsNodes 7.32
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.39
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.05
217 TestPreload 10.09
219 TestScheduledStopUnix 9.98
220 TestSkaffold 12.15
223 TestRunningBinaryUpgrade 600.2
225 TestKubernetesUpgrade 18.51
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.11
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.94
241 TestStoppedBinaryUpgrade/Upgrade 573.31
243 TestPause/serial/Start 9.97
253 TestNoKubernetes/serial/StartWithK8s 9.85
254 TestNoKubernetes/serial/StartWithStopK8s 5.3
255 TestNoKubernetes/serial/Start 5.27
259 TestNoKubernetes/serial/StartNoArgs 5.3
261 TestNetworkPlugins/group/auto/Start 9.7
262 TestNetworkPlugins/group/kindnet/Start 9.69
263 TestNetworkPlugins/group/flannel/Start 9.87
264 TestNetworkPlugins/group/enable-default-cni/Start 9.82
265 TestNetworkPlugins/group/bridge/Start 9.74
266 TestNetworkPlugins/group/kubenet/Start 9.83
267 TestNetworkPlugins/group/custom-flannel/Start 9.79
268 TestNetworkPlugins/group/calico/Start 9.66
269 TestNetworkPlugins/group/false/Start 9.74
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.96
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.94
285 TestStartStop/group/embed-certs/serial/FirstStart 11.18
286 TestStartStop/group/no-preload/serial/DeployApp 0.1
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.14
290 TestStartStop/group/no-preload/serial/SecondStart 5.57
291 TestStartStop/group/embed-certs/serial/DeployApp 0.1
292 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
293 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
296 TestStartStop/group/no-preload/serial/Pause 0.1
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.81
301 TestStartStop/group/embed-certs/serial/SecondStart 6.71
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
303 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
304 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
306 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
307 TestStartStop/group/embed-certs/serial/Pause 0.11
310 TestStartStop/group/newest-cni/serial/FirstStart 9.82
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.94
315 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.06
319 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.25
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (10.78s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-666000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-666000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (10.774737916s)

-- stdout --
	{"specversion":"1.0","id":"b1340e4c-ebc5-4f7e-bf06-fa3beae86a06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-666000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a473ace-bf0f-4430-99d3-27417e7e3dc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19166"}}
	{"specversion":"1.0","id":"04194a3c-7e9d-4a06-97e6-96729f8086a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig"}}
	{"specversion":"1.0","id":"26dcd500-6c84-4cd5-abfd-fc32c065ccc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f1fefe63-f7c9-4f3a-99bb-302edb387bd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e3f00da8-9653-4727-9ea0-63da0fdd4616","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube"}}
	{"specversion":"1.0","id":"6781b93b-6d4c-4b6f-8c27-7c3fec3920e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"c28f7c5b-45e7-4a48-98d2-11fd29349d27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ef9371d-da34-491b-97a1-328055f9922b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"75840bec-fb83-48b1-881b-d9ba05dce202","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a67dc5be-2626-4d9c-b96a-0920ebc30977","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-666000\" primary control-plane node in \"download-only-666000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb7c4195-9616-4959-8a0e-f826f0deeefa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c06e33ec-2b14-4381-bd21-df43a3385dc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60] Decompressors:map[bz2:0x14000887aa0 gz:0x14000887aa8 tar:0x14000887a50 tar.bz2:0x14000887a60 tar.gz:0x14000887a70 tar.xz:0x14000887a80 tar.zst:0x14000887a90 tbz2:0x14000887a60 tgz:0x14000887a70 txz:0x14000887a80 tzst:0x14000887a90 xz:0x14000887ab0 zip:0x14000887ac0 zst:0x14000887ab8] Getters:map[file:0x140004a6950 http:0x140008b41e0 https:0x140008b4230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"f98a3c5b-337e-4afb-a07b-429cc5293a34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0701 04:49:39.546065   10005 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:49:39.546202   10005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:49:39.546206   10005 out.go:304] Setting ErrFile to fd 2...
	I0701 04:49:39.546208   10005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:49:39.546336   10005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	W0701 04:49:39.546437   10005 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19166-9507/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19166-9507/.minikube/config/config.json: no such file or directory
	I0701 04:49:39.547714   10005 out.go:298] Setting JSON to true
	I0701 04:49:39.565302   10005 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6548,"bootTime":1719828031,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:49:39.565391   10005 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:49:39.571537   10005 out.go:97] [download-only-666000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:49:39.571702   10005 notify.go:220] Checking for updates...
	W0701 04:49:39.571763   10005 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball: no such file or directory
	I0701 04:49:39.575441   10005 out.go:169] MINIKUBE_LOCATION=19166
	I0701 04:49:39.581527   10005 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:49:39.585403   10005 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:49:39.588504   10005 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:49:39.591494   10005 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	W0701 04:49:39.596442   10005 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 04:49:39.596641   10005 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:49:39.599485   10005 out.go:97] Using the qemu2 driver based on user configuration
	I0701 04:49:39.599505   10005 start.go:297] selected driver: qemu2
	I0701 04:49:39.599509   10005 start.go:901] validating driver "qemu2" against <nil>
	I0701 04:49:39.599603   10005 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 04:49:39.602493   10005 out.go:169] Automatically selected the socket_vmnet network
	I0701 04:49:39.608015   10005 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0701 04:49:39.608139   10005 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 04:49:39.608194   10005 cni.go:84] Creating CNI manager for ""
	I0701 04:49:39.608211   10005 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0701 04:49:39.608281   10005 start.go:340] cluster config:
	{Name:download-only-666000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:49:39.612385   10005 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 04:49:39.616496   10005 out.go:97] Downloading VM boot image ...
	I0701 04:49:39.616513   10005 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso
	I0701 04:49:44.065320   10005 out.go:97] Starting "download-only-666000" primary control-plane node in "download-only-666000" cluster
	I0701 04:49:44.065359   10005 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 04:49:44.117179   10005 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0701 04:49:44.117201   10005 cache.go:56] Caching tarball of preloaded images
	I0701 04:49:44.117364   10005 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 04:49:44.123897   10005 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0701 04:49:44.123904   10005 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 04:49:44.197634   10005 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0701 04:49:49.206174   10005 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 04:49:49.206325   10005 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 04:49:49.902069   10005 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0701 04:49:49.902290   10005 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/download-only-666000/config.json ...
	I0701 04:49:49.902308   10005 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/download-only-666000/config.json: {Name:mkca6ff7504630bcae3120017be8656fc2eb8640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 04:49:49.902576   10005 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 04:49:49.903018   10005 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0701 04:49:50.243094   10005 out.go:169] 
	W0701 04:49:50.248075   10005 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60] Decompressors:map[bz2:0x14000887aa0 gz:0x14000887aa8 tar:0x14000887a50 tar.bz2:0x14000887a60 tar.gz:0x14000887a70 tar.xz:0x14000887a80 tar.zst:0x14000887a90 tbz2:0x14000887a60 tgz:0x14000887a70 txz:0x14000887a80 tzst:0x14000887a90 xz:0x14000887ab0 zip:0x14000887ac0 zst:0x14000887ab8] Getters:map[file:0x140004a6950 http:0x140008b41e0 https:0x140008b4230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0701 04:49:50.248099   10005 out_reason.go:110] 
	W0701 04:49:50.256005   10005 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:49:50.259970   10005 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-666000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (10.78s)
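
Note on the root cause: the start command exits with status 40 because the kubectl checksum URL for darwin/arm64 at v1.20.0 returns HTTP 404; per the INET_CACHE_KUBECTL error above, upstream does not serve that binary for this platform/version combination. The standalone Go sketch below (illustrative, not part of the minikube test suite) reproduces the 404 outside the harness:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The same checksum URL that the download-only test fails on.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url) // HEAD is enough; only the status code matters
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// Expected result, matching the log: "404 Not Found", which minikube
	// surfaces as exit status 40 (INET_CACHE_KUBECTL).
	fmt.Println(url, "->", resp.Status)
}

Run on a host with network access, this should print a 404 for the v1.20.0 darwin/arm64 URL, while the same path for a release that ships darwin/arm64 binaries (e.g. v1.30.2, used elsewhere in this job) resolves.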

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
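
This failure is a knock-on from the json-events failure above: kubectl was never downloaded, so the cached binary the test stats is absent. A minimal Go sketch of the same existence check (hypothetical standalone program; the path is copied from the log):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path the test expects, taken verbatim from the aaa_download_only_test.go:175 message.
	path := "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(path); err != nil {
		fmt.Println("cached binary missing:", err) // what this run observes
		return
	}
	fmt.Println("cached binary present:", path)
}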

TestOffline (9.89s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-143000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-143000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.740643542s)

-- stdout --
	* [offline-docker-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-143000" primary control-plane node in "offline-docker-143000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-143000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:01:42.190000   11484 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:01:42.190151   11484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:01:42.190154   11484 out.go:304] Setting ErrFile to fd 2...
	I0701 05:01:42.190156   11484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:01:42.190309   11484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:01:42.191479   11484 out.go:298] Setting JSON to false
	I0701 05:01:42.208881   11484 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7271,"bootTime":1719828031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:01:42.209005   11484 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:01:42.213012   11484 out.go:177] * [offline-docker-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:01:42.219932   11484 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:01:42.219964   11484 notify.go:220] Checking for updates...
	I0701 05:01:42.226965   11484 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:01:42.230035   11484 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:01:42.232975   11484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:01:42.236014   11484 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:01:42.238946   11484 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:01:42.242298   11484 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:01:42.242364   11484 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:01:42.245983   11484 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:01:42.252933   11484 start.go:297] selected driver: qemu2
	I0701 05:01:42.252944   11484 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:01:42.252952   11484 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:01:42.254997   11484 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:01:42.257978   11484 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:01:42.260925   11484 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:01:42.260955   11484 cni.go:84] Creating CNI manager for ""
	I0701 05:01:42.260962   11484 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:01:42.260966   11484 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:01:42.260999   11484 start.go:340] cluster config:
	{Name:offline-docker-143000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:01:42.264529   11484 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:42.271913   11484 out.go:177] * Starting "offline-docker-143000" primary control-plane node in "offline-docker-143000" cluster
	I0701 05:01:42.275957   11484 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:01:42.275994   11484 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:01:42.276001   11484 cache.go:56] Caching tarball of preloaded images
	I0701 05:01:42.276077   11484 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:01:42.276082   11484 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:01:42.276143   11484 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/offline-docker-143000/config.json ...
	I0701 05:01:42.276154   11484 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/offline-docker-143000/config.json: {Name:mk6071506475ebb0cf9024b11bc0df89b918a4ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:01:42.276535   11484 start.go:360] acquireMachinesLock for offline-docker-143000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:01:42.276568   11484 start.go:364] duration metric: took 25µs to acquireMachinesLock for "offline-docker-143000"
	I0701 05:01:42.276580   11484 start.go:93] Provisioning new machine with config: &{Name:offline-docker-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:01:42.276623   11484 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:01:42.280952   11484 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 05:01:42.296843   11484 start.go:159] libmachine.API.Create for "offline-docker-143000" (driver="qemu2")
	I0701 05:01:42.296878   11484 client.go:168] LocalClient.Create starting
	I0701 05:01:42.296953   11484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:01:42.296983   11484 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:42.296991   11484 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:42.297041   11484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:01:42.297063   11484 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:42.297071   11484 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:42.297519   11484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:01:42.431383   11484 main.go:141] libmachine: Creating SSH key...
	I0701 05:01:42.468935   11484 main.go:141] libmachine: Creating Disk image...
	I0701 05:01:42.468947   11484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:01:42.469148   11484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2
	I0701 05:01:42.486455   11484 main.go:141] libmachine: STDOUT: 
	I0701 05:01:42.486482   11484 main.go:141] libmachine: STDERR: 
	I0701 05:01:42.486536   11484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2 +20000M
	I0701 05:01:42.495025   11484 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:01:42.495042   11484 main.go:141] libmachine: STDERR: 
	I0701 05:01:42.495060   11484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2
	I0701 05:01:42.495065   11484 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:01:42.495100   11484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:57:b4:5b:11:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2
	I0701 05:01:42.496824   11484 main.go:141] libmachine: STDOUT: 
	I0701 05:01:42.496841   11484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:01:42.496859   11484 client.go:171] duration metric: took 199.977666ms to LocalClient.Create
	I0701 05:01:44.498952   11484 start.go:128] duration metric: took 2.222324625s to createHost
	I0701 05:01:44.498990   11484 start.go:83] releasing machines lock for "offline-docker-143000", held for 2.222426208s
	W0701 05:01:44.499019   11484 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:01:44.512060   11484 out.go:177] * Deleting "offline-docker-143000" in qemu2 ...
	W0701 05:01:44.521154   11484 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:01:44.521168   11484 start.go:728] Will try again in 5 seconds ...
	I0701 05:01:49.523423   11484 start.go:360] acquireMachinesLock for offline-docker-143000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:01:49.523935   11484 start.go:364] duration metric: took 345.625µs to acquireMachinesLock for "offline-docker-143000"
	I0701 05:01:49.524105   11484 start.go:93] Provisioning new machine with config: &{Name:offline-docker-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:01:49.524371   11484 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:01:49.535120   11484 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 05:01:49.585478   11484 start.go:159] libmachine.API.Create for "offline-docker-143000" (driver="qemu2")
	I0701 05:01:49.585535   11484 client.go:168] LocalClient.Create starting
	I0701 05:01:49.585641   11484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:01:49.585696   11484 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:49.585709   11484 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:49.585784   11484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:01:49.585828   11484 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:49.585839   11484 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:49.586507   11484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:01:49.725729   11484 main.go:141] libmachine: Creating SSH key...
	I0701 05:01:49.837833   11484 main.go:141] libmachine: Creating Disk image...
	I0701 05:01:49.837839   11484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:01:49.837996   11484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2
	I0701 05:01:49.846941   11484 main.go:141] libmachine: STDOUT: 
	I0701 05:01:49.846962   11484 main.go:141] libmachine: STDERR: 
	I0701 05:01:49.847003   11484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2 +20000M
	I0701 05:01:49.854689   11484 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:01:49.854701   11484 main.go:141] libmachine: STDERR: 
	I0701 05:01:49.854710   11484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2
	I0701 05:01:49.854714   11484 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:01:49.854739   11484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2e:fa:16:b1:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/offline-docker-143000/disk.qcow2
	I0701 05:01:49.856284   11484 main.go:141] libmachine: STDOUT: 
	I0701 05:01:49.856299   11484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:01:49.856310   11484 client.go:171] duration metric: took 270.770125ms to LocalClient.Create
	I0701 05:01:51.858467   11484 start.go:128] duration metric: took 2.334082541s to createHost
	I0701 05:01:51.858536   11484 start.go:83] releasing machines lock for "offline-docker-143000", held for 2.334562083s
	W0701 05:01:51.859033   11484 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:01:51.867654   11484 out.go:177] 
	W0701 05:01:51.871749   11484 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:01:51.871777   11484 out.go:239] * 
	* 
	W0701 05:01:51.874260   11484 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:01:51.883539   11484 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-143000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-01 05:01:51.900011 -0700 PDT m=+732.478423376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-143000 -n offline-docker-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-143000 -n offline-docker-143000: exit status 7 (71.981792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-143000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-143000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-143000
--- FAIL: TestOffline (9.89s)
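
Note on the root cause: every qemu2 start in this run fails identically. libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet ... and the client is refused on the unix socket, so host creation aborts with GUEST_PROVISION. The standalone Go sketch below (a hypothetical diagnostic, not part of minikube) separates "socket file missing" from "file present but no daemon listening":

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing QEMU command line
	if _, err := os.Stat(sock); err != nil {
		// socket_vmnet never created its socket on this host
		fmt.Println("socket file missing:", err)
		return
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the libmachine STDERR above:
		// the file exists but no socket_vmnet daemon is accepting on it.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet daemon is accepting connections")
}

If the dial is refused while the file exists, the socket_vmnet daemon on the CI agent is down; restarting it (however it is managed on this host, e.g. a launchd service, which is an assumption here) would likely clear this entire class of qemu2 failures.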

TestAddons/Setup (10.36s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-711000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-711000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.36116825s)

-- stdout --
	* [addons-711000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-711000" primary control-plane node in "addons-711000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-711000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 04:49:59.771997   10085 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:49:59.772134   10085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:49:59.772137   10085 out.go:304] Setting ErrFile to fd 2...
	I0701 04:49:59.772140   10085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:49:59.772301   10085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:49:59.773365   10085 out.go:298] Setting JSON to false
	I0701 04:49:59.789612   10085 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6568,"bootTime":1719828031,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:49:59.789673   10085 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:49:59.794201   10085 out.go:177] * [addons-711000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:49:59.801259   10085 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 04:49:59.801337   10085 notify.go:220] Checking for updates...
	I0701 04:49:59.808202   10085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:49:59.811149   10085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:49:59.814210   10085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:49:59.817183   10085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 04:49:59.820143   10085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 04:49:59.823420   10085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:49:59.827154   10085 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 04:49:59.834375   10085 start.go:297] selected driver: qemu2
	I0701 04:49:59.834381   10085 start.go:901] validating driver "qemu2" against <nil>
	I0701 04:49:59.834388   10085 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 04:49:59.836759   10085 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 04:49:59.840215   10085 out.go:177] * Automatically selected the socket_vmnet network
	I0701 04:49:59.843325   10085 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 04:49:59.843362   10085 cni.go:84] Creating CNI manager for ""
	I0701 04:49:59.843370   10085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 04:49:59.843374   10085 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 04:49:59.843405   10085 start.go:340] cluster config:
	{Name:addons-711000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:49:59.847230   10085 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 04:49:59.856152   10085 out.go:177] * Starting "addons-711000" primary control-plane node in "addons-711000" cluster
	I0701 04:49:59.860189   10085 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 04:49:59.860207   10085 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 04:49:59.860216   10085 cache.go:56] Caching tarball of preloaded images
	I0701 04:49:59.860285   10085 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 04:49:59.860291   10085 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 04:49:59.860506   10085 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/addons-711000/config.json ...
	I0701 04:49:59.860518   10085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/addons-711000/config.json: {Name:mk69f233a0cc15b56a1f4d2d66c8109175c28566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 04:49:59.860957   10085 start.go:360] acquireMachinesLock for addons-711000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:49:59.861025   10085 start.go:364] duration metric: took 62.084µs to acquireMachinesLock for "addons-711000"
	I0701 04:49:59.861042   10085 start.go:93] Provisioning new machine with config: &{Name:addons-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 04:49:59.861074   10085 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 04:49:59.870133   10085 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0701 04:49:59.888453   10085 start.go:159] libmachine.API.Create for "addons-711000" (driver="qemu2")
	I0701 04:49:59.888486   10085 client.go:168] LocalClient.Create starting
	I0701 04:49:59.888627   10085 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 04:49:59.990733   10085 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 04:50:00.224116   10085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 04:50:00.404514   10085 main.go:141] libmachine: Creating SSH key...
	I0701 04:50:00.683824   10085 main.go:141] libmachine: Creating Disk image...
	I0701 04:50:00.683838   10085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 04:50:00.684062   10085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2
	I0701 04:50:00.693951   10085 main.go:141] libmachine: STDOUT: 
	I0701 04:50:00.693988   10085 main.go:141] libmachine: STDERR: 
	I0701 04:50:00.694046   10085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2 +20000M
	I0701 04:50:00.702024   10085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 04:50:00.702036   10085 main.go:141] libmachine: STDERR: 
	I0701 04:50:00.702049   10085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2
	I0701 04:50:00.702055   10085 main.go:141] libmachine: Starting QEMU VM...
	I0701 04:50:00.702095   10085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:09:a4:7c:f1:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2
	I0701 04:50:00.703713   10085 main.go:141] libmachine: STDOUT: 
	I0701 04:50:00.703726   10085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:50:00.703746   10085 client.go:171] duration metric: took 815.250208ms to LocalClient.Create
	I0701 04:50:02.705937   10085 start.go:128] duration metric: took 2.844826583s to createHost
	I0701 04:50:02.706004   10085 start.go:83] releasing machines lock for "addons-711000", held for 2.844952917s
	W0701 04:50:02.706097   10085 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:50:02.718347   10085 out.go:177] * Deleting "addons-711000" in qemu2 ...
	W0701 04:50:02.744273   10085 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:50:02.744321   10085 start.go:728] Will try again in 5 seconds ...
	I0701 04:50:07.746569   10085 start.go:360] acquireMachinesLock for addons-711000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:50:07.747057   10085 start.go:364] duration metric: took 388.417µs to acquireMachinesLock for "addons-711000"
	I0701 04:50:07.747190   10085 start.go:93] Provisioning new machine with config: &{Name:addons-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 04:50:07.747503   10085 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 04:50:07.759035   10085 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0701 04:50:07.809196   10085 start.go:159] libmachine.API.Create for "addons-711000" (driver="qemu2")
	I0701 04:50:07.809242   10085 client.go:168] LocalClient.Create starting
	I0701 04:50:07.809372   10085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 04:50:07.809440   10085 main.go:141] libmachine: Decoding PEM data...
	I0701 04:50:07.809456   10085 main.go:141] libmachine: Parsing certificate...
	I0701 04:50:07.809552   10085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 04:50:07.809600   10085 main.go:141] libmachine: Decoding PEM data...
	I0701 04:50:07.809624   10085 main.go:141] libmachine: Parsing certificate...
	I0701 04:50:07.810232   10085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 04:50:07.950822   10085 main.go:141] libmachine: Creating SSH key...
	I0701 04:50:08.044456   10085 main.go:141] libmachine: Creating Disk image...
	I0701 04:50:08.044461   10085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 04:50:08.044630   10085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2
	I0701 04:50:08.053788   10085 main.go:141] libmachine: STDOUT: 
	I0701 04:50:08.053805   10085 main.go:141] libmachine: STDERR: 
	I0701 04:50:08.053859   10085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2 +20000M
	I0701 04:50:08.061644   10085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 04:50:08.061659   10085 main.go:141] libmachine: STDERR: 
	I0701 04:50:08.061669   10085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2
	I0701 04:50:08.061672   10085 main.go:141] libmachine: Starting QEMU VM...
	I0701 04:50:08.061712   10085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:93:2a:93:2b:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/addons-711000/disk.qcow2
	I0701 04:50:08.063288   10085 main.go:141] libmachine: STDOUT: 
	I0701 04:50:08.063301   10085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:50:08.063314   10085 client.go:171] duration metric: took 254.061583ms to LocalClient.Create
	I0701 04:50:10.065543   10085 start.go:128] duration metric: took 2.317946167s to createHost
	I0701 04:50:10.065601   10085 start.go:83] releasing machines lock for "addons-711000", held for 2.318480583s
	W0701 04:50:10.065944   10085 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:50:10.075490   10085 out.go:177] 
	W0701 04:50:10.080556   10085 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:50:10.080582   10085 out.go:239] * 
	* 
	W0701 04:50:10.083452   10085 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:50:10.090382   10085 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-711000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.36s)
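The "executing:" lines in the stderr above show how minikube launches the VM: socket_vmnet_client connects to the unix socket and hands the connected descriptor to QEMU as fd 3 for the -netdev socket,id=net0,fd=3 backend. The handshake can be exercised in isolation with a pared-down sketch of the same invocation (flags trimmed for illustration; the guest boots nothing useful, but the socket connection is tested):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
		qemu-system-aarch64 -M virt -accel hvf -cpu host -m 512 -display none \
		-netdev socket,id=net0,fd=3 -device virtio-net-pci,netdev=net0

With the daemon down, this exits immediately with the same Failed to connect to "/var/run/socket_vmnet": Connection refused seen throughout this report.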

TestCertOptions (9.93s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-638000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-638000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.675545875s)

-- stdout --
	* [cert-options-638000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-638000" primary control-plane node in "cert-options-638000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-638000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-638000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-638000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-638000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-638000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.98525ms)

-- stdout --
	* The control-plane node cert-options-638000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-638000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-638000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-638000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-638000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-638000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (39.653833ms)

-- stdout --
	* The control-plane node cert-options-638000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-638000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-638000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-638000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-638000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-01 05:02:24.260771 -0700 PDT m=+764.839321085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-638000 -n cert-options-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-638000 -n cert-options-638000: exit status 7 (30.573291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-638000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-638000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-638000
--- FAIL: TestCertOptions (9.93s)
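The SAN assertions at cert_options_test.go:69 are knock-on failures: with the host stopped there is no apiserver.crt to read. Against a healthy profile, the check the test performs amounts to the following sketch (the grep filter is illustrative, not part of the test):

	out/minikube-darwin-arm64 -p cert-options-638000 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
		| grep -A1 "Subject Alternative Name"

which should list 127.0.0.1, 192.168.15.15, localhost, and www.google.com in the SAN block, with the apiserver served on port 8555.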

TestCertExpiration (195.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-556000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-556000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.903710834s)

-- stdout --
	* [cert-expiration-556000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-556000" primary control-plane node in "cert-expiration-556000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-556000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-556000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-556000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-556000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-556000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.2193215s)

-- stdout --
	* [cert-expiration-556000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-556000" primary control-plane node in "cert-expiration-556000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-556000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-556000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-556000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-556000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-556000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-556000" primary control-plane node in "cert-expiration-556000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-556000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-556000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-556000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-01 05:05:24.342339 -0700 PDT m=+944.921650293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-556000 -n cert-expiration-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-556000 -n cert-expiration-556000: exit status 7 (43.763875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-556000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-556000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-556000
--- FAIL: TestCertExpiration (195.25s)
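The ~195s duration is expected even though both starts fail within seconds: the test appears to wait out the 3-minute --cert-expiration=3m window between the two start attempts (9.9s + ~180s + 5.2s). On a running node, the shortened expiry could be confirmed directly with a sketch like:

	out/minikube-darwin-arm64 -p cert-expiration-556000 ssh \
		"sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"

which would print a notAfter= timestamp roughly three minutes after the first start.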

TestDockerFlags (10.21s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-122000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-122000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.991622541s)

-- stdout --
	* [docker-flags-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-122000" primary control-plane node in "docker-flags-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:02:04.247936   11675 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:02:04.248075   11675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:02:04.248078   11675 out.go:304] Setting ErrFile to fd 2...
	I0701 05:02:04.248081   11675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:02:04.248201   11675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:02:04.249263   11675 out.go:298] Setting JSON to false
	I0701 05:02:04.265202   11675 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7293,"bootTime":1719828031,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:02:04.265267   11675 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:02:04.269200   11675 out.go:177] * [docker-flags-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:02:04.277167   11675 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:02:04.277199   11675 notify.go:220] Checking for updates...
	I0701 05:02:04.284027   11675 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:02:04.287061   11675 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:02:04.290079   11675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:02:04.292967   11675 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:02:04.296091   11675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:02:04.299376   11675 config.go:182] Loaded profile config "force-systemd-flag-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:02:04.299448   11675 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:02:04.299508   11675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:02:04.303027   11675 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:02:04.310060   11675 start.go:297] selected driver: qemu2
	I0701 05:02:04.310065   11675 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:02:04.310072   11675 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:02:04.312360   11675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:02:04.316065   11675 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:02:04.319112   11675 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0701 05:02:04.319125   11675 cni.go:84] Creating CNI manager for ""
	I0701 05:02:04.319135   11675 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:02:04.319139   11675 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:02:04.319166   11675 start.go:340] cluster config:
	{Name:docker-flags-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:02:04.322820   11675 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:02:04.330005   11675 out.go:177] * Starting "docker-flags-122000" primary control-plane node in "docker-flags-122000" cluster
	I0701 05:02:04.334094   11675 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:02:04.334113   11675 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:02:04.334124   11675 cache.go:56] Caching tarball of preloaded images
	I0701 05:02:04.334191   11675 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:02:04.334197   11675 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:02:04.334259   11675 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/docker-flags-122000/config.json ...
	I0701 05:02:04.334270   11675 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/docker-flags-122000/config.json: {Name:mk033e054d398013b4c63450b3595a81e1ad5724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:02:04.334487   11675 start.go:360] acquireMachinesLock for docker-flags-122000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:02:04.334522   11675 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "docker-flags-122000"
	I0701 05:02:04.334535   11675 start.go:93] Provisioning new machine with config: &{Name:docker-flags-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:02:04.334574   11675 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:02:04.342035   11675 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 05:02:04.359641   11675 start.go:159] libmachine.API.Create for "docker-flags-122000" (driver="qemu2")
	I0701 05:02:04.359669   11675 client.go:168] LocalClient.Create starting
	I0701 05:02:04.359742   11675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:02:04.359773   11675 main.go:141] libmachine: Decoding PEM data...
	I0701 05:02:04.359782   11675 main.go:141] libmachine: Parsing certificate...
	I0701 05:02:04.359828   11675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:02:04.359851   11675 main.go:141] libmachine: Decoding PEM data...
	I0701 05:02:04.359860   11675 main.go:141] libmachine: Parsing certificate...
	I0701 05:02:04.360208   11675 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:02:04.496222   11675 main.go:141] libmachine: Creating SSH key...
	I0701 05:02:04.669210   11675 main.go:141] libmachine: Creating Disk image...
	I0701 05:02:04.669217   11675 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:02:04.669381   11675 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2
	I0701 05:02:04.678729   11675 main.go:141] libmachine: STDOUT: 
	I0701 05:02:04.678748   11675 main.go:141] libmachine: STDERR: 
	I0701 05:02:04.678790   11675 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2 +20000M
	I0701 05:02:04.686603   11675 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:02:04.686615   11675 main.go:141] libmachine: STDERR: 
	I0701 05:02:04.686625   11675 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2
	I0701 05:02:04.686629   11675 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:02:04.686667   11675 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:fa:d9:c5:97:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2
	I0701 05:02:04.688299   11675 main.go:141] libmachine: STDOUT: 
	I0701 05:02:04.688357   11675 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:02:04.688375   11675 client.go:171] duration metric: took 328.702042ms to LocalClient.Create
	I0701 05:02:06.690531   11675 start.go:128] duration metric: took 2.355949625s to createHost
	I0701 05:02:06.690579   11675 start.go:83] releasing machines lock for "docker-flags-122000", held for 2.356056125s
	W0701 05:02:06.690623   11675 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:02:06.709641   11675 out.go:177] * Deleting "docker-flags-122000" in qemu2 ...
	W0701 05:02:06.726213   11675 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:02:06.726236   11675 start.go:728] Will try again in 5 seconds ...
	I0701 05:02:11.728419   11675 start.go:360] acquireMachinesLock for docker-flags-122000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:02:11.728887   11675 start.go:364] duration metric: took 327.625µs to acquireMachinesLock for "docker-flags-122000"
	I0701 05:02:11.729045   11675 start.go:93] Provisioning new machine with config: &{Name:docker-flags-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:02:11.729269   11675 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:02:11.738583   11675 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 05:02:11.789780   11675 start.go:159] libmachine.API.Create for "docker-flags-122000" (driver="qemu2")
	I0701 05:02:11.789842   11675 client.go:168] LocalClient.Create starting
	I0701 05:02:11.789955   11675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:02:11.790015   11675 main.go:141] libmachine: Decoding PEM data...
	I0701 05:02:11.790033   11675 main.go:141] libmachine: Parsing certificate...
	I0701 05:02:11.790094   11675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:02:11.790140   11675 main.go:141] libmachine: Decoding PEM data...
	I0701 05:02:11.790152   11675 main.go:141] libmachine: Parsing certificate...
	I0701 05:02:11.790691   11675 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:02:11.933686   11675 main.go:141] libmachine: Creating SSH key...
	I0701 05:02:12.145638   11675 main.go:141] libmachine: Creating Disk image...
	I0701 05:02:12.145650   11675 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:02:12.145838   11675 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2
	I0701 05:02:12.155607   11675 main.go:141] libmachine: STDOUT: 
	I0701 05:02:12.155626   11675 main.go:141] libmachine: STDERR: 
	I0701 05:02:12.155676   11675 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2 +20000M
	I0701 05:02:12.163565   11675 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:02:12.163592   11675 main.go:141] libmachine: STDERR: 
	I0701 05:02:12.163605   11675 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2
	I0701 05:02:12.163612   11675 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:02:12.163656   11675 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:05:17:7f:26:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/docker-flags-122000/disk.qcow2
	I0701 05:02:12.165284   11675 main.go:141] libmachine: STDOUT: 
	I0701 05:02:12.165298   11675 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:02:12.165314   11675 client.go:171] duration metric: took 375.468791ms to LocalClient.Create
	I0701 05:02:14.167478   11675 start.go:128] duration metric: took 2.438189416s to createHost
	I0701 05:02:14.167553   11675 start.go:83] releasing machines lock for "docker-flags-122000", held for 2.438648459s
	W0701 05:02:14.168015   11675 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:02:14.179592   11675 out.go:177] 
	W0701 05:02:14.183767   11675 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:02:14.183797   11675 out.go:239] * 
	* 
	W0701 05:02:14.186649   11675 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:02:14.196658   11675 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-122000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-122000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-122000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (73.301833ms)

-- stdout --
	* The control-plane node docker-flags-122000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-122000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-122000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-122000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-122000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-122000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-122000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-122000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-122000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.223ms)

-- stdout --
	* The control-plane node docker-flags-122000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-122000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-122000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-122000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug* . output: "* The control-plane node docker-flags-122000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-122000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-01 05:02:14.331633 -0700 PDT m=+754.910140751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-122000 -n docker-flags-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-122000 -n docker-flags-122000: exit status 7 (29.075166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-122000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-122000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-122000
--- FAIL: TestDockerFlags (10.21s)
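
Note: every VM create in this run dies at the same step. qemu-img prepares the disk image without error, then socket_vmnet_client reports `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so QEMU is never launched and every later ssh/status call sees state=Stopped. A minimal triage sketch for the build host follows; the binary and socket paths are taken from the log above, while the process check and the `echo` probe are assumptions about a typical socket_vmnet install rather than something this log confirms.

	# Is the unix socket present at the path minikube uses?
	ls -l /var/run/socket_vmnet

	# Is a socket_vmnet daemon running at all? (assumed setup; how the
	# service is managed on this Jenkins host may differ)
	ps aux | grep '[s]ocket_vmnet'

	# Can a client connect? socket_vmnet_client runs the given command with
	# the connected socket on fd 3, so a trivial command is enough to
	# separate "daemon down" from "QEMU problem".
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo connected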

TestForceSystemdFlag (10.04s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-972000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-972000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.855824083s)

-- stdout --
	* [force-systemd-flag-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-972000" primary control-plane node in "force-systemd-flag-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:01:59.200980   11652 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:01:59.201129   11652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:01:59.201132   11652 out.go:304] Setting ErrFile to fd 2...
	I0701 05:01:59.201134   11652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:01:59.201251   11652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:01:59.202340   11652 out.go:298] Setting JSON to false
	I0701 05:01:59.219410   11652 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7288,"bootTime":1719828031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:01:59.219486   11652 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:01:59.225299   11652 out.go:177] * [force-systemd-flag-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:01:59.230280   11652 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:01:59.230305   11652 notify.go:220] Checking for updates...
	I0701 05:01:59.237285   11652 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:01:59.240261   11652 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:01:59.243268   11652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:01:59.246230   11652 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:01:59.249284   11652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:01:59.252584   11652 config.go:182] Loaded profile config "force-systemd-env-076000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:01:59.252663   11652 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:01:59.252718   11652 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:01:59.257266   11652 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:01:59.264200   11652 start.go:297] selected driver: qemu2
	I0701 05:01:59.264205   11652 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:01:59.264212   11652 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:01:59.266361   11652 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:01:59.269215   11652 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:01:59.272363   11652 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 05:01:59.272401   11652 cni.go:84] Creating CNI manager for ""
	I0701 05:01:59.272410   11652 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:01:59.272414   11652 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:01:59.272458   11652 start.go:340] cluster config:
	{Name:force-systemd-flag-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:01:59.275978   11652 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:59.281197   11652 out.go:177] * Starting "force-systemd-flag-972000" primary control-plane node in "force-systemd-flag-972000" cluster
	I0701 05:01:59.285204   11652 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:01:59.285218   11652 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:01:59.285226   11652 cache.go:56] Caching tarball of preloaded images
	I0701 05:01:59.285281   11652 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:01:59.285287   11652 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:01:59.285336   11652 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/force-systemd-flag-972000/config.json ...
	I0701 05:01:59.285347   11652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/force-systemd-flag-972000/config.json: {Name:mk93fb0c2af0cc4203fabaaebbed9182ab2ea123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:01:59.285699   11652 start.go:360] acquireMachinesLock for force-systemd-flag-972000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:01:59.285743   11652 start.go:364] duration metric: took 35.75µs to acquireMachinesLock for "force-systemd-flag-972000"
	I0701 05:01:59.285757   11652 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:01:59.285837   11652 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:01:59.294268   11652 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 05:01:59.311220   11652 start.go:159] libmachine.API.Create for "force-systemd-flag-972000" (driver="qemu2")
	I0701 05:01:59.311243   11652 client.go:168] LocalClient.Create starting
	I0701 05:01:59.311298   11652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:01:59.311331   11652 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:59.311344   11652 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:59.311382   11652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:01:59.311410   11652 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:59.311417   11652 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:59.311838   11652 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:01:59.439782   11652 main.go:141] libmachine: Creating SSH key...
	I0701 05:01:59.597853   11652 main.go:141] libmachine: Creating Disk image...
	I0701 05:01:59.597861   11652 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:01:59.598042   11652 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2
	I0701 05:01:59.607540   11652 main.go:141] libmachine: STDOUT: 
	I0701 05:01:59.607555   11652 main.go:141] libmachine: STDERR: 
	I0701 05:01:59.607615   11652 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2 +20000M
	I0701 05:01:59.615404   11652 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:01:59.615418   11652 main.go:141] libmachine: STDERR: 
	I0701 05:01:59.615433   11652 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2
	I0701 05:01:59.615440   11652 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:01:59.615468   11652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:68:00:25:a5:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2
	I0701 05:01:59.617077   11652 main.go:141] libmachine: STDOUT: 
	I0701 05:01:59.617091   11652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:01:59.617109   11652 client.go:171] duration metric: took 305.862875ms to LocalClient.Create
	I0701 05:02:01.619317   11652 start.go:128] duration metric: took 2.333460666s to createHost
	I0701 05:02:01.619477   11652 start.go:83] releasing machines lock for "force-systemd-flag-972000", held for 2.333681125s
	W0701 05:02:01.619558   11652 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:02:01.638843   11652 out.go:177] * Deleting "force-systemd-flag-972000" in qemu2 ...
	W0701 05:02:01.656140   11652 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:02:01.656183   11652 start.go:728] Will try again in 5 seconds ...
	I0701 05:02:06.658386   11652 start.go:360] acquireMachinesLock for force-systemd-flag-972000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:02:06.690657   11652 start.go:364] duration metric: took 32.15975ms to acquireMachinesLock for "force-systemd-flag-972000"
	I0701 05:02:06.690833   11652 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:02:06.691132   11652 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:02:06.700787   11652 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 05:02:06.749495   11652 start.go:159] libmachine.API.Create for "force-systemd-flag-972000" (driver="qemu2")
	I0701 05:02:06.749543   11652 client.go:168] LocalClient.Create starting
	I0701 05:02:06.749668   11652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:02:06.749737   11652 main.go:141] libmachine: Decoding PEM data...
	I0701 05:02:06.749754   11652 main.go:141] libmachine: Parsing certificate...
	I0701 05:02:06.749811   11652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:02:06.749854   11652 main.go:141] libmachine: Decoding PEM data...
	I0701 05:02:06.749871   11652 main.go:141] libmachine: Parsing certificate...
	I0701 05:02:06.750432   11652 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:02:06.893360   11652 main.go:141] libmachine: Creating SSH key...
	I0701 05:02:06.963423   11652 main.go:141] libmachine: Creating Disk image...
	I0701 05:02:06.963428   11652 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:02:06.963588   11652 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2
	I0701 05:02:06.972616   11652 main.go:141] libmachine: STDOUT: 
	I0701 05:02:06.972634   11652 main.go:141] libmachine: STDERR: 
	I0701 05:02:06.972691   11652 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2 +20000M
	I0701 05:02:06.980412   11652 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:02:06.980426   11652 main.go:141] libmachine: STDERR: 
	I0701 05:02:06.980438   11652 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2
	I0701 05:02:06.980441   11652 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:02:06.980475   11652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:6b:0e:29:56:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-flag-972000/disk.qcow2
	I0701 05:02:06.982088   11652 main.go:141] libmachine: STDOUT: 
	I0701 05:02:06.982103   11652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:02:06.982115   11652 client.go:171] duration metric: took 232.5685ms to LocalClient.Create
	I0701 05:02:08.984279   11652 start.go:128] duration metric: took 2.293131625s to createHost
	I0701 05:02:08.984396   11652 start.go:83] releasing machines lock for "force-systemd-flag-972000", held for 2.293680542s
	W0701 05:02:08.984797   11652 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:02:08.996539   11652 out.go:177] 
	W0701 05:02:09.003690   11652 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:02:09.003713   11652 out.go:239] * 
	* 
	W0701 05:02:09.006296   11652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:02:09.015359   11652 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-972000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-972000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-972000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.638208ms)

-- stdout --
	* The control-plane node force-systemd-flag-972000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-972000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-972000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-01 05:02:09.110547 -0700 PDT m=+749.689032210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-972000 -n force-systemd-flag-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-972000 -n force-systemd-flag-972000: exit status 7 (33.384042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-972000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-972000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-972000
--- FAIL: TestForceSystemdFlag (10.04s)
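
Note: in each attempt above the disk-image preparation itself succeeds (qemu-img convert, then resize, both with empty STDERR); only the socket_vmnet connection fails. The two qemu-img steps can be replayed in isolation to confirm the image pipeline is healthy. This is a stand-alone sketch: $MACHINE_DIR and the 1G seed image are placeholders, not paths or sizes from this report (the real raw image comes from minikube's boot2docker machinery).

	# Stand-alone replay of the disk steps the log shows succeeding.
	MACHINE_DIR="$HOME/qemu-img-repro"            # placeholder directory
	mkdir -p "$MACHINE_DIR"
	qemu-img create -f raw "$MACHINE_DIR/disk.qcow2.raw" 1G   # placeholder seed image

	# Same convert + resize sequence libmachine runs above:
	qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"
	qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M
	qemu-img info "$MACHINE_DIR/disk.qcow2"       # virtual size should now be ~1G + 20000M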

TestForceSystemdEnv (12.17s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-076000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-076000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.987876709s)

-- stdout --
	* [force-systemd-env-076000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-076000" primary control-plane node in "force-systemd-env-076000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-076000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:01:52.079262   11620 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:01:52.079454   11620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:01:52.079457   11620 out.go:304] Setting ErrFile to fd 2...
	I0701 05:01:52.079460   11620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:01:52.079583   11620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:01:52.080634   11620 out.go:298] Setting JSON to false
	I0701 05:01:52.096862   11620 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7281,"bootTime":1719828031,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:01:52.096922   11620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:01:52.101240   11620 out.go:177] * [force-systemd-env-076000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:01:52.108116   11620 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:01:52.108146   11620 notify.go:220] Checking for updates...
	I0701 05:01:52.115016   11620 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:01:52.118113   11620 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:01:52.121147   11620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:01:52.124046   11620 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:01:52.127064   11620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0701 05:01:52.130430   11620 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:01:52.130488   11620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:01:52.134080   11620 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:01:52.141118   11620 start.go:297] selected driver: qemu2
	I0701 05:01:52.141126   11620 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:01:52.141134   11620 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:01:52.143467   11620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:01:52.147199   11620 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:01:52.150143   11620 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 05:01:52.150171   11620 cni.go:84] Creating CNI manager for ""
	I0701 05:01:52.150179   11620 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:01:52.150183   11620 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:01:52.150218   11620 start.go:340] cluster config:
	{Name:force-systemd-env-076000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-076000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:01:52.153866   11620 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:52.161026   11620 out.go:177] * Starting "force-systemd-env-076000" primary control-plane node in "force-systemd-env-076000" cluster
	I0701 05:01:52.165139   11620 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:01:52.165156   11620 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:01:52.165168   11620 cache.go:56] Caching tarball of preloaded images
	I0701 05:01:52.165240   11620 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:01:52.165246   11620 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:01:52.165332   11620 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/force-systemd-env-076000/config.json ...
	I0701 05:01:52.165343   11620 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/force-systemd-env-076000/config.json: {Name:mk09cd71c59a3757e88431d0bbea8bea2c84cde5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:01:52.165558   11620 start.go:360] acquireMachinesLock for force-systemd-env-076000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:01:52.165593   11620 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "force-systemd-env-076000"
	I0701 05:01:52.165605   11620 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-076000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-076000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:01:52.165630   11620 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:01:52.174083   11620 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 05:01:52.191520   11620 start.go:159] libmachine.API.Create for "force-systemd-env-076000" (driver="qemu2")
	I0701 05:01:52.191553   11620 client.go:168] LocalClient.Create starting
	I0701 05:01:52.191620   11620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:01:52.191655   11620 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:52.191664   11620 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:52.191706   11620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:01:52.191730   11620 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:52.191737   11620 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:52.192129   11620 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:01:52.322009   11620 main.go:141] libmachine: Creating SSH key...
	I0701 05:01:52.557777   11620 main.go:141] libmachine: Creating Disk image...
	I0701 05:01:52.557785   11620 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:01:52.557986   11620 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2
	I0701 05:01:52.567789   11620 main.go:141] libmachine: STDOUT: 
	I0701 05:01:52.567808   11620 main.go:141] libmachine: STDERR: 
	I0701 05:01:52.567869   11620 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2 +20000M
	I0701 05:01:52.575754   11620 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:01:52.575770   11620 main.go:141] libmachine: STDERR: 
	I0701 05:01:52.575785   11620 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2
	I0701 05:01:52.575790   11620 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:01:52.575831   11620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:29:bd:a7:0f:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2
	I0701 05:01:52.577468   11620 main.go:141] libmachine: STDOUT: 
	I0701 05:01:52.577483   11620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:01:52.577501   11620 client.go:171] duration metric: took 385.943ms to LocalClient.Create
	I0701 05:01:54.579629   11620 start.go:128] duration metric: took 2.413993s to createHost
	I0701 05:01:54.579680   11620 start.go:83] releasing machines lock for "force-systemd-env-076000", held for 2.414088583s
	W0701 05:01:54.579745   11620 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:01:54.586388   11620 out.go:177] * Deleting "force-systemd-env-076000" in qemu2 ...
	W0701 05:01:54.602773   11620 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:01:54.602800   11620 start.go:728] Will try again in 5 seconds ...
	I0701 05:01:59.603470   11620 start.go:360] acquireMachinesLock for force-systemd-env-076000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:02:01.619641   11620 start.go:364] duration metric: took 2.016146042s to acquireMachinesLock for "force-systemd-env-076000"
	I0701 05:02:01.619843   11620 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-076000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-076000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:02:01.620071   11620 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:02:01.629843   11620 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 05:02:01.681527   11620 start.go:159] libmachine.API.Create for "force-systemd-env-076000" (driver="qemu2")
	I0701 05:02:01.681581   11620 client.go:168] LocalClient.Create starting
	I0701 05:02:01.681727   11620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:02:01.681790   11620 main.go:141] libmachine: Decoding PEM data...
	I0701 05:02:01.681806   11620 main.go:141] libmachine: Parsing certificate...
	I0701 05:02:01.681869   11620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:02:01.681914   11620 main.go:141] libmachine: Decoding PEM data...
	I0701 05:02:01.681924   11620 main.go:141] libmachine: Parsing certificate...
	I0701 05:02:01.682531   11620 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:02:01.823247   11620 main.go:141] libmachine: Creating SSH key...
	I0701 05:02:01.971532   11620 main.go:141] libmachine: Creating Disk image...
	I0701 05:02:01.971538   11620 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:02:01.971729   11620 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2
	I0701 05:02:01.981309   11620 main.go:141] libmachine: STDOUT: 
	I0701 05:02:01.981326   11620 main.go:141] libmachine: STDERR: 
	I0701 05:02:01.981386   11620 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2 +20000M
	I0701 05:02:01.989176   11620 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:02:01.989196   11620 main.go:141] libmachine: STDERR: 
	I0701 05:02:01.989209   11620 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2
	I0701 05:02:01.989213   11620 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:02:01.989245   11620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:7f:38:46:57:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/force-systemd-env-076000/disk.qcow2
	I0701 05:02:01.990801   11620 main.go:141] libmachine: STDOUT: 
	I0701 05:02:01.990817   11620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:02:01.990829   11620 client.go:171] duration metric: took 309.243292ms to LocalClient.Create
	I0701 05:02:03.993150   11620 start.go:128] duration metric: took 2.373015416s to createHost
	I0701 05:02:03.993231   11620 start.go:83] releasing machines lock for "force-systemd-env-076000", held for 2.373544917s
	W0701 05:02:03.993524   11620 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-076000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-076000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:02:04.006237   11620 out.go:177] 
	W0701 05:02:04.011079   11620 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:02:04.011184   11620 out.go:239] * 
	* 
	W0701 05:02:04.014053   11620 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:02:04.025143   11620 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-076000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-076000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-076000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (73.738083ms)

-- stdout --
	* The control-plane node force-systemd-env-076000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-076000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-076000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-01 05:02:04.113165 -0700 PDT m=+744.691629376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-076000 -n force-systemd-env-076000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-076000 -n force-systemd-env-076000: exit status 7 (31.1715ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-076000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-076000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-076000
--- FAIL: TestForceSystemdEnv (12.17s)
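
Note: every failure in this report reduces to the same root cause: nothing was listening on /var/run/socket_vmnet, so socket_vmnet_client could not obtain a vmnet file descriptor to hand to QEMU (the fd=3 in the -netdev argument above) and no VM was ever started. A minimal Go sketch of the first step that handoff requires — dialing the unix socket; the path comes from the logs, and this is illustrative, not the driver's actual code:

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Dial the control socket the way socket_vmnet_client does before
		// launching qemu-system-aarch64. "connection refused" here is the
		// exact state every test in this run hit.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

On this host the probe would print the same "connection refused" error, i.e. the socket_vmnet daemon was simply not running when the suite executed.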

TestErrorSpam/setup (9.8s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-145000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-145000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 --driver=qemu2 : exit status 80 (9.793568875s)

-- stdout --
	* [nospam-145000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-145000" primary control-plane node in "nospam-145000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-145000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-145000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-145000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-145000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-145000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19166
- KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-145000" primary control-plane node in "nospam-145000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-145000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-145000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.80s)
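
Note: the three "missing kubeadm init sub-step" assertions above are plain substring scans of the captured stdout; because the VM never booted, kubeadm never ran and none of the markers can appear. A rough Go sketch of that kind of scan — the step strings are the ones quoted above, while the real assertions live in error_spam_test.go:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		stdout := "" // stdout captured from `minikube start`; empty here for illustration
		steps := []string{
			"Generating certificates and keys ...",
			"Booting up control plane ...",
			"Configuring RBAC rules ...",
		}
		for _, step := range steps {
			if !strings.Contains(stdout, step) {
				fmt.Printf("missing kubeadm init sub-step %q\n", step)
			}
		}
	}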

TestFunctional/serial/StartWithProxy (9.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-750000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-750000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.881579334s)

-- stdout --
	* [functional-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-750000" primary control-plane node in "functional-750000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-750000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51971 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51971 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51971 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-750000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19166
- KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-750000" primary control-plane node in "functional-750000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-750000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51971 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51971 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51971 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (72.014792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.96s)
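
Note: the repeated "Local proxy ignored" warnings explain why the expected "You appear to be using a proxy" message never shows up: HTTP_PROXY pointed at localhost:51971, and a localhost proxy is deliberately not passed through to the docker env. A hedged Go sketch of what such a localhost-proxy check can look like — illustrative only, not minikube's implementation:

	package main

	import (
		"fmt"
		"net"
		"net/url"
		"os"
		"strings"
	)

	// isLocalProxy reports whether a proxy address points back at this host,
	// handling both bare host:port values and full URLs.
	func isLocalProxy(raw string) bool {
		hostport := raw
		if u, err := url.Parse(raw); err == nil && u.Host != "" {
			hostport = u.Host
		}
		host := hostport
		if h, _, err := net.SplitHostPort(hostport); err == nil {
			host = h
		}
		return host == "localhost" || strings.HasPrefix(host, "127.")
	}

	func main() {
		if p := os.Getenv("HTTP_PROXY"); p != "" && isLocalProxy(p) {
			fmt.Printf("! Local proxy ignored: not passing HTTP_PROXY=%s to docker env.\n", p)
		}
	}

Running the sketch with HTTP_PROXY=localhost:51971 set reproduces the warning line seen in the stderr above.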

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-750000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-750000 --alsologtostderr -v=8: exit status 80 (5.193860958s)

-- stdout --
	* [functional-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-750000" primary control-plane node in "functional-750000" cluster
	* Restarting existing qemu2 VM for "functional-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 04:50:40.586117   10229 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:50:40.586259   10229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:50:40.586262   10229 out.go:304] Setting ErrFile to fd 2...
	I0701 04:50:40.586265   10229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:50:40.586407   10229 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:50:40.587435   10229 out.go:298] Setting JSON to false
	I0701 04:50:40.603612   10229 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6609,"bootTime":1719828031,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:50:40.603682   10229 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:50:40.608867   10229 out.go:177] * [functional-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:50:40.615849   10229 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 04:50:40.615904   10229 notify.go:220] Checking for updates...
	I0701 04:50:40.623767   10229 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:50:40.630728   10229 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:50:40.633804   10229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:50:40.636818   10229 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 04:50:40.639695   10229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 04:50:40.643054   10229 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:50:40.643109   10229 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:50:40.647770   10229 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 04:50:40.654866   10229 start.go:297] selected driver: qemu2
	I0701 04:50:40.654872   10229 start.go:901] validating driver "qemu2" against &{Name:functional-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:50:40.654933   10229 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 04:50:40.657287   10229 cni.go:84] Creating CNI manager for ""
	I0701 04:50:40.657306   10229 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 04:50:40.657367   10229 start.go:340] cluster config:
	{Name:functional-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:50:40.661142   10229 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 04:50:40.669789   10229 out.go:177] * Starting "functional-750000" primary control-plane node in "functional-750000" cluster
	I0701 04:50:40.673807   10229 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 04:50:40.673824   10229 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 04:50:40.673838   10229 cache.go:56] Caching tarball of preloaded images
	I0701 04:50:40.673913   10229 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 04:50:40.673920   10229 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 04:50:40.674024   10229 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/functional-750000/config.json ...
	I0701 04:50:40.674540   10229 start.go:360] acquireMachinesLock for functional-750000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:50:40.674582   10229 start.go:364] duration metric: took 34.5µs to acquireMachinesLock for "functional-750000"
	I0701 04:50:40.674592   10229 start.go:96] Skipping create...Using existing machine configuration
	I0701 04:50:40.674598   10229 fix.go:54] fixHost starting: 
	I0701 04:50:40.674729   10229 fix.go:112] recreateIfNeeded on functional-750000: state=Stopped err=<nil>
	W0701 04:50:40.674738   10229 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 04:50:40.683809   10229 out.go:177] * Restarting existing qemu2 VM for "functional-750000" ...
	I0701 04:50:40.687780   10229 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:52:87:b2:a7:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/disk.qcow2
	I0701 04:50:40.689982   10229 main.go:141] libmachine: STDOUT: 
	I0701 04:50:40.690004   10229 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:50:40.690034   10229 fix.go:56] duration metric: took 15.435042ms for fixHost
	I0701 04:50:40.690040   10229 start.go:83] releasing machines lock for "functional-750000", held for 15.453583ms
	W0701 04:50:40.690047   10229 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:50:40.690079   10229 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:50:40.690084   10229 start.go:728] Will try again in 5 seconds ...
	I0701 04:50:45.692297   10229 start.go:360] acquireMachinesLock for functional-750000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:50:45.692783   10229 start.go:364] duration metric: took 344µs to acquireMachinesLock for "functional-750000"
	I0701 04:50:45.692925   10229 start.go:96] Skipping create...Using existing machine configuration
	I0701 04:50:45.692946   10229 fix.go:54] fixHost starting: 
	I0701 04:50:45.693756   10229 fix.go:112] recreateIfNeeded on functional-750000: state=Stopped err=<nil>
	W0701 04:50:45.693794   10229 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 04:50:45.702149   10229 out.go:177] * Restarting existing qemu2 VM for "functional-750000" ...
	I0701 04:50:45.706424   10229 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:52:87:b2:a7:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/disk.qcow2
	I0701 04:50:45.715555   10229 main.go:141] libmachine: STDOUT: 
	I0701 04:50:45.715617   10229 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:50:45.715704   10229 fix.go:56] duration metric: took 22.756417ms for fixHost
	I0701 04:50:45.715726   10229 start.go:83] releasing machines lock for "functional-750000", held for 22.918875ms
	W0701 04:50:45.715917   10229 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:50:45.723240   10229 out.go:177] 
	W0701 04:50:45.727295   10229 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:50:45.727427   10229 out.go:239] * 
	* 
	W0701 04:50:45.730230   10229 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:50:45.737172   10229 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-750000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.195616542s for "functional-750000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (69.510875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
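
Note: each start hands QEMU a QMP monitor socket (the -qmp unix:.../monitor,server,nowait argument in the log). When a soft start fails like this, probing that socket separates "the VM process never launched" from "the VM is up but unreachable". A sketch of the standard QMP handshake in Go — the socket path is copied from the log above; on this host the dial itself would fail, confirming the former:

	package main

	import (
		"bufio"
		"fmt"
		"net"
		"os"
	)

	func main() {
		const sock = "/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/monitor"
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Fprintln(os.Stderr, err) // here: the VM process never launched
			os.Exit(1)
		}
		defer conn.Close()
		r := bufio.NewReader(conn)
		r.ReadString('\n') // QMP greeting banner
		fmt.Fprintln(conn, `{"execute":"qmp_capabilities"}`)
		r.ReadString('\n') // capabilities ack
		fmt.Fprintln(conn, `{"execute":"query-status"}`)
		status, _ := r.ReadString('\n')
		fmt.Print(status) // e.g. {"return": {"status": "running", ...}}
	}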

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.766667ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-750000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (30.489959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
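
Note: "current-context is not set" confirms that the failed start never wrote a context into the kubeconfig. The check kubectl performs here can be reproduced with client-go's kubeconfig loader; a sketch, assuming the k8s.io/client-go dependency is available:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config (or $KUBECONFIG) the same way kubectl does.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if cfg.CurrentContext == "" {
			fmt.Fprintln(os.Stderr, "error: current-context is not set") // the state seen above
			os.Exit(1)
		}
		fmt.Println(cfg.CurrentContext)
	}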

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-750000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-750000 get po -A: exit status 1 (26.233208ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-750000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-750000\n"*: args "kubectl --context functional-750000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-750000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (30.454125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh sudo crictl images: exit status 83 (42.856042ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-750000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (38.6195ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-750000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.7995ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.871542ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-750000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
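
Note: the cache_reload scenario is a four-step round-trip: delete the cached image inside the guest, confirm it is gone, run "minikube cache reload", then confirm "crictl inspecti" finds it again. A sketch of the same sequence driven from Go with os/exec — the binary path and profile name are the ones used in this run; here every ssh step exits 83 because the host is stopped:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run invokes the minikube binary under test, mirroring what the test
	// helpers do, and streams its output.
	func run(args ...string) error {
		cmd := exec.Command("out/minikube-darwin-arm64", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// Steps 1 and 3; the intermediate "confirm gone" check is omitted for brevity.
		run("-p", "functional-750000", "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		run("-p", "functional-750000", "cache", "reload")
		// Step 4: the image should be back after the reload.
		if err := run("-p", "functional-750000", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Fprintln(os.Stderr, "image did not come back after cache reload:", err)
		}
	}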

TestFunctional/serial/MinikubeKubectlCmd (0.64s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 kubectl -- --context functional-750000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 kubectl -- --context functional-750000 get pods: exit status 1 (603.283166ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-750000
	* no server found for cluster "functional-750000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-750000 kubectl -- --context functional-750000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (32.5145ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.64s)
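
Note: the "context was not found" / "no server found" errors above are consistent with the cluster never having started: minikube writes the functional-750000 context into the kubeconfig only after a successful start. A minimal host-side check (a sketch, assuming the KUBECONFIG path shown in the ExtraConfig start output below) is:

	kubectl config get-contexts
	kubectl config view -o jsonpath='{.clusters[*].name}'

If the context is absent, `minikube -p functional-750000 update-context` can only repair it once the VM actually boots; until then every kubectl invocation against this context fails the same way, which is why MinikubeKubectlCmdDirectly below fails identically.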

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-750000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-750000 get pods: exit status 1 (930.634583ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-750000
	* no server found for cluster "functional-750000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-750000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (29.505584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-750000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-750000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.181596291s)

-- stdout --
	* [functional-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-750000" primary control-plane node in "functional-750000" cluster
	* Restarting existing qemu2 VM for "functional-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-750000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.182139834s for "functional-750000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (69.879041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
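
Note: every restart attempt in this block dies at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach a socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A quick host-side triage (a sketch, assuming socket_vmnet was installed via Homebrew and runs as a root-managed brew service, per minikube's qemu2 driver setup) is:

	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep socket_vmnet
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet

Because the refusal happens on the host before the VM ever boots, the suggested `minikube delete -p functional-750000` cannot fix it on its own; the same "Connection refused" also accounts for most of the other qemu2 start failures in this report.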

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-750000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-750000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.237208ms)

** stderr ** 
	error: context "functional-750000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-750000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (30.106666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 logs: exit status 83 (76.589042ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
	|         | -p download-only-666000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
	| delete  | -p download-only-666000                                                  | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
	| start   | -o=json --download-only                                                  | download-only-897000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
	|         | -p download-only-897000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
	| delete  | -p download-only-897000                                                  | download-only-897000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
	| delete  | -p download-only-666000                                                  | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
	| delete  | -p download-only-897000                                                  | download-only-897000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
	| start   | --download-only -p                                                       | binary-mirror-076000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
	|         | binary-mirror-076000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51936                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-076000                                                  | binary-mirror-076000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
	| addons  | enable dashboard -p                                                      | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
	|         | addons-711000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
	|         | addons-711000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-711000 --wait=true                                             | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-711000                                                         | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	| start   | -p nospam-145000 -n=1 --memory=2250 --wait=false                         | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-145000                                                         | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	| start   | -p functional-750000                                                     | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-750000                                                     | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | minikube-local-cache-test:functional-750000                              |                      |         |         |                     |                     |
	| cache   | functional-750000 cache delete                                           | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | minikube-local-cache-test:functional-750000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	| ssh     | functional-750000 ssh sudo                                               | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-750000                                                        | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-750000 ssh                                                    | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-750000 cache reload                                           | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	| ssh     | functional-750000 ssh                                                    | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-750000 kubectl --                                             | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | --context functional-750000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-750000                                                     | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 04:50:50
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 04:50:50.675749   10310 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:50:50.675885   10310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:50:50.675887   10310 out.go:304] Setting ErrFile to fd 2...
	I0701 04:50:50.675888   10310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:50:50.676010   10310 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:50:50.677031   10310 out.go:298] Setting JSON to false
	I0701 04:50:50.692822   10310 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6619,"bootTime":1719828031,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:50:50.692884   10310 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:50:50.698612   10310 out.go:177] * [functional-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:50:50.707611   10310 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 04:50:50.707652   10310 notify.go:220] Checking for updates...
	I0701 04:50:50.714535   10310 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:50:50.717521   10310 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:50:50.720581   10310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:50:50.723513   10310 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 04:50:50.726541   10310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 04:50:50.729866   10310 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:50:50.729913   10310 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:50:50.734471   10310 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 04:50:50.741578   10310 start.go:297] selected driver: qemu2
	I0701 04:50:50.741583   10310 start.go:901] validating driver "qemu2" against &{Name:functional-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:50:50.741641   10310 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 04:50:50.743930   10310 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 04:50:50.743961   10310 cni.go:84] Creating CNI manager for ""
	I0701 04:50:50.743969   10310 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 04:50:50.744010   10310 start.go:340] cluster config:
	{Name:functional-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:50:50.747582   10310 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 04:50:50.755516   10310 out.go:177] * Starting "functional-750000" primary control-plane node in "functional-750000" cluster
	I0701 04:50:50.759519   10310 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 04:50:50.759529   10310 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 04:50:50.759535   10310 cache.go:56] Caching tarball of preloaded images
	I0701 04:50:50.759583   10310 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 04:50:50.759587   10310 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 04:50:50.759639   10310 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/functional-750000/config.json ...
	I0701 04:50:50.760087   10310 start.go:360] acquireMachinesLock for functional-750000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:50:50.760119   10310 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "functional-750000"
	I0701 04:50:50.760126   10310 start.go:96] Skipping create...Using existing machine configuration
	I0701 04:50:50.760130   10310 fix.go:54] fixHost starting: 
	I0701 04:50:50.760240   10310 fix.go:112] recreateIfNeeded on functional-750000: state=Stopped err=<nil>
	W0701 04:50:50.760246   10310 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 04:50:50.768532   10310 out.go:177] * Restarting existing qemu2 VM for "functional-750000" ...
	I0701 04:50:50.772616   10310 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:52:87:b2:a7:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/disk.qcow2
	I0701 04:50:50.774536   10310 main.go:141] libmachine: STDOUT: 
	I0701 04:50:50.774550   10310 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:50:50.774578   10310 fix.go:56] duration metric: took 14.447542ms for fixHost
	I0701 04:50:50.774582   10310 start.go:83] releasing machines lock for "functional-750000", held for 14.460959ms
	W0701 04:50:50.774586   10310 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:50:50.774625   10310 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:50:50.774636   10310 start.go:728] Will try again in 5 seconds ...
	I0701 04:50:55.776807   10310 start.go:360] acquireMachinesLock for functional-750000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:50:55.777239   10310 start.go:364] duration metric: took 355.417µs to acquireMachinesLock for "functional-750000"
	I0701 04:50:55.777404   10310 start.go:96] Skipping create...Using existing machine configuration
	I0701 04:50:55.777414   10310 fix.go:54] fixHost starting: 
	I0701 04:50:55.778183   10310 fix.go:112] recreateIfNeeded on functional-750000: state=Stopped err=<nil>
	W0701 04:50:55.778204   10310 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 04:50:55.781813   10310 out.go:177] * Restarting existing qemu2 VM for "functional-750000" ...
	I0701 04:50:55.786684   10310 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:52:87:b2:a7:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/disk.qcow2
	I0701 04:50:55.796442   10310 main.go:141] libmachine: STDOUT: 
	I0701 04:50:55.796517   10310 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:50:55.796597   10310 fix.go:56] duration metric: took 19.184375ms for fixHost
	I0701 04:50:55.796613   10310 start.go:83] releasing machines lock for "functional-750000", held for 19.3565ms
	W0701 04:50:55.796796   10310 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:50:55.803725   10310 out.go:177] 
	W0701 04:50:55.807669   10310 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:50:55.807683   10310 out.go:239] * 
	W0701 04:50:55.809606   10310 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:50:55.817653   10310 out.go:177] 
	
	
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-750000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | -p download-only-666000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| delete  | -p download-only-666000                                                  | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| start   | -o=json --download-only                                                  | download-only-897000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | -p download-only-897000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| delete  | -p download-only-897000                                                  | download-only-897000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| delete  | -p download-only-666000                                                  | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| delete  | -p download-only-897000                                                  | download-only-897000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| start   | --download-only -p                                                       | binary-mirror-076000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | binary-mirror-076000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51936                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-076000                                                  | binary-mirror-076000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| addons  | enable dashboard -p                                                      | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | addons-711000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | addons-711000                                                            |                      |         |         |                     |                     |
| start   | -p addons-711000 --wait=true                                             | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-711000                                                         | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
| start   | -p nospam-145000 -n=1 --memory=2250 --wait=false                         | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-145000                                                         | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
| start   | -p functional-750000                                                     | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-750000                                                     | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | minikube-local-cache-test:functional-750000                              |                      |         |         |                     |                     |
| cache   | functional-750000 cache delete                                           | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | minikube-local-cache-test:functional-750000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
| ssh     | functional-750000 ssh sudo                                               | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-750000                                                        | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-750000 ssh                                                    | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-750000 cache reload                                           | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
| ssh     | functional-750000 ssh                                                    | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-750000 kubectl --                                             | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | --context functional-750000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-750000                                                     | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
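The Audit table above is minikube's own command history for this run, rendered by "minikube logs" from the audit log kept under the run's MINIKUBE_HOME. When the rendered table is truncated, the raw log can be inspected directly; a minimal sketch, assuming the default logs/audit.json layout and a one-JSON-object-per-line format with a "data" payload (the path suffix and field names are assumptions, not verified against this minikube version):

  # MINIKUBE_HOME for this run, taken from the Last Start log below
  AUDIT=/Users/jenkins/minikube-integration/19166-9507/.minikube/logs/audit.json
  tail -n 5 "$AUDIT"
  # Tabulate command, args and profile per entry (assumed schema)
  jq -r '.data | [.command, .args // "", .profile] | @tsv' "$AUDIT"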
==> Last Start <==
Log file created at: 2024/07/01 04:50:50
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0701 04:50:50.675749   10310 out.go:291] Setting OutFile to fd 1 ...
I0701 04:50:50.675885   10310 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:50:50.675887   10310 out.go:304] Setting ErrFile to fd 2...
I0701 04:50:50.675888   10310 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:50:50.676010   10310 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:50:50.677031   10310 out.go:298] Setting JSON to false
I0701 04:50:50.692822   10310 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6619,"bootTime":1719828031,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0701 04:50:50.692884   10310 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0701 04:50:50.698612   10310 out.go:177] * [functional-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0701 04:50:50.707611   10310 out.go:177]   - MINIKUBE_LOCATION=19166
I0701 04:50:50.707652   10310 notify.go:220] Checking for updates...
I0701 04:50:50.714535   10310 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
I0701 04:50:50.717521   10310 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0701 04:50:50.720581   10310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0701 04:50:50.723513   10310 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
I0701 04:50:50.726541   10310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0701 04:50:50.729866   10310 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:50:50.729913   10310 driver.go:392] Setting default libvirt URI to qemu:///system
I0701 04:50:50.734471   10310 out.go:177] * Using the qemu2 driver based on existing profile
I0701 04:50:50.741578   10310 start.go:297] selected driver: qemu2
I0701 04:50:50.741583   10310 start.go:901] validating driver "qemu2" against &{Name:functional-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0701 04:50:50.741641   10310 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0701 04:50:50.743930   10310 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0701 04:50:50.743961   10310 cni.go:84] Creating CNI manager for ""
I0701 04:50:50.743969   10310 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0701 04:50:50.744010   10310 start.go:340] cluster config:
{Name:functional-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0701 04:50:50.747582   10310 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 04:50:50.755516   10310 out.go:177] * Starting "functional-750000" primary control-plane node in "functional-750000" cluster
I0701 04:50:50.759519   10310 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0701 04:50:50.759529   10310 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
I0701 04:50:50.759535   10310 cache.go:56] Caching tarball of preloaded images
I0701 04:50:50.759583   10310 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0701 04:50:50.759587   10310 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0701 04:50:50.759639   10310 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/functional-750000/config.json ...
I0701 04:50:50.760087   10310 start.go:360] acquireMachinesLock for functional-750000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0701 04:50:50.760119   10310 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "functional-750000"
I0701 04:50:50.760126   10310 start.go:96] Skipping create...Using existing machine configuration
I0701 04:50:50.760130   10310 fix.go:54] fixHost starting: 
I0701 04:50:50.760240   10310 fix.go:112] recreateIfNeeded on functional-750000: state=Stopped err=<nil>
W0701 04:50:50.760246   10310 fix.go:138] unexpected machine state, will restart: <nil>
I0701 04:50:50.768532   10310 out.go:177] * Restarting existing qemu2 VM for "functional-750000" ...
I0701 04:50:50.772616   10310 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:52:87:b2:a7:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/disk.qcow2
I0701 04:50:50.774536   10310 main.go:141] libmachine: STDOUT: 
I0701 04:50:50.774550   10310 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0701 04:50:50.774578   10310 fix.go:56] duration metric: took 14.447542ms for fixHost
I0701 04:50:50.774582   10310 start.go:83] releasing machines lock for "functional-750000", held for 14.460959ms
W0701 04:50:50.774586   10310 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0701 04:50:50.774625   10310 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0701 04:50:50.774636   10310 start.go:728] Will try again in 5 seconds ...
I0701 04:50:55.776807   10310 start.go:360] acquireMachinesLock for functional-750000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0701 04:50:55.777239   10310 start.go:364] duration metric: took 355.417µs to acquireMachinesLock for "functional-750000"
I0701 04:50:55.777404   10310 start.go:96] Skipping create...Using existing machine configuration
I0701 04:50:55.777414   10310 fix.go:54] fixHost starting: 
I0701 04:50:55.778183   10310 fix.go:112] recreateIfNeeded on functional-750000: state=Stopped err=<nil>
W0701 04:50:55.778204   10310 fix.go:138] unexpected machine state, will restart: <nil>
I0701 04:50:55.781813   10310 out.go:177] * Restarting existing qemu2 VM for "functional-750000" ...
I0701 04:50:55.786684   10310 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:52:87:b2:a7:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/disk.qcow2
I0701 04:50:55.796442   10310 main.go:141] libmachine: STDOUT: 
I0701 04:50:55.796517   10310 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0701 04:50:55.796597   10310 fix.go:56] duration metric: took 19.184375ms for fixHost
I0701 04:50:55.796613   10310 start.go:83] releasing machines lock for "functional-750000", held for 19.3565ms
W0701 04:50:55.796796   10310 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0701 04:50:55.803725   10310 out.go:177] 
W0701 04:50:55.807669   10310 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0701 04:50:55.807683   10310 out.go:239] * 
W0701 04:50:55.809606   10310 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0701 04:50:55.817653   10310 out.go:177] 
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
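The root cause is visible in the Last Start log above: both restart attempts abort with Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet daemon that socket_vmnet_client dials is not running on the agent, so the qemu2 VM never boots and minikube logs has nothing to report. A minimal shell sketch for checking the daemon on the host (the foreground invocation and gateway address follow the socket_vmnet README defaults; the brew services line assumes a Homebrew-managed install, which may not match this agent's /opt/socket_vmnet layout):

  # Does anything hold the unix socket minikube dials?
  ls -l /var/run/socket_vmnet
  sudo lsof -U | grep socket_vmnet
  # Homebrew-managed install (assumption):
  sudo brew services restart socket_vmnet
  # Or run the daemon in the foreground for debugging (make-install layout, matching the client path in the log):
  sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet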
TestFunctional/serial/LogsFileCmd (0.07s)
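This test trips over the same stopped VM: with no guest running, the collected log contains no Linux kernel/OS lines, so the word check below fails. The check is easy to replay by hand; a sketch using the exact invocation from the test output below (only the output path is changed):

  # Re-run the failing step manually (binary, profile and expectation from the log)
  out/minikube-darwin-arm64 -p functional-750000 logs --file /tmp/logs.txt
  grep -c Linux /tmp/logs.txt   # the test expects at least one occurrence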
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2714733475/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | -p download-only-666000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| delete  | -p download-only-666000                                                  | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| start   | -o=json --download-only                                                  | download-only-897000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | -p download-only-897000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| delete  | -p download-only-897000                                                  | download-only-897000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| delete  | -p download-only-666000                                                  | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| delete  | -p download-only-897000                                                  | download-only-897000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| start   | --download-only -p                                                       | binary-mirror-076000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | binary-mirror-076000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51936                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-076000                                                  | binary-mirror-076000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
| addons  | enable dashboard -p                                                      | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | addons-711000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | addons-711000                                                            |                      |         |         |                     |                     |
| start   | -p addons-711000 --wait=true                                             | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-711000                                                         | addons-711000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
| start   | -p nospam-145000 -n=1 --memory=2250 --wait=false                         | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-145000 --log_dir                                                  | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-145000                                                         | nospam-145000        | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
| start   | -p functional-750000                                                     | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-750000                                                     | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-750000 cache add                                              | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | minikube-local-cache-test:functional-750000                              |                      |         |         |                     |                     |
| cache   | functional-750000 cache delete                                           | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | minikube-local-cache-test:functional-750000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
| ssh     | functional-750000 ssh sudo                                               | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-750000                                                        | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-750000 ssh                                                    | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-750000 cache reload                                           | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
| ssh     | functional-750000 ssh                                                    | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT | 01 Jul 24 04:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-750000 kubectl --                                             | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | --context functional-750000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-750000                                                     | functional-750000    | jenkins | v1.33.1 | 01 Jul 24 04:50 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/01 04:50:50
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0701 04:50:50.675749   10310 out.go:291] Setting OutFile to fd 1 ...
I0701 04:50:50.675885   10310 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:50:50.675887   10310 out.go:304] Setting ErrFile to fd 2...
I0701 04:50:50.675888   10310 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:50:50.676010   10310 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:50:50.677031   10310 out.go:298] Setting JSON to false
I0701 04:50:50.692822   10310 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6619,"bootTime":1719828031,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0701 04:50:50.692884   10310 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0701 04:50:50.698612   10310 out.go:177] * [functional-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0701 04:50:50.707611   10310 out.go:177]   - MINIKUBE_LOCATION=19166
I0701 04:50:50.707652   10310 notify.go:220] Checking for updates...
I0701 04:50:50.714535   10310 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
I0701 04:50:50.717521   10310 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0701 04:50:50.720581   10310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0701 04:50:50.723513   10310 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
I0701 04:50:50.726541   10310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0701 04:50:50.729866   10310 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:50:50.729913   10310 driver.go:392] Setting default libvirt URI to qemu:///system
I0701 04:50:50.734471   10310 out.go:177] * Using the qemu2 driver based on existing profile
I0701 04:50:50.741578   10310 start.go:297] selected driver: qemu2
I0701 04:50:50.741583   10310 start.go:901] validating driver "qemu2" against &{Name:functional-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0701 04:50:50.741641   10310 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0701 04:50:50.743930   10310 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0701 04:50:50.743961   10310 cni.go:84] Creating CNI manager for ""
I0701 04:50:50.743969   10310 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0701 04:50:50.744010   10310 start.go:340] cluster config:
{Name:functional-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0701 04:50:50.747582   10310 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 04:50:50.755516   10310 out.go:177] * Starting "functional-750000" primary control-plane node in "functional-750000" cluster
I0701 04:50:50.759519   10310 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0701 04:50:50.759529   10310 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
I0701 04:50:50.759535   10310 cache.go:56] Caching tarball of preloaded images
I0701 04:50:50.759583   10310 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0701 04:50:50.759587   10310 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0701 04:50:50.759639   10310 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/functional-750000/config.json ...
I0701 04:50:50.760087   10310 start.go:360] acquireMachinesLock for functional-750000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0701 04:50:50.760119   10310 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "functional-750000"
I0701 04:50:50.760126   10310 start.go:96] Skipping create...Using existing machine configuration
I0701 04:50:50.760130   10310 fix.go:54] fixHost starting: 
I0701 04:50:50.760240   10310 fix.go:112] recreateIfNeeded on functional-750000: state=Stopped err=<nil>
W0701 04:50:50.760246   10310 fix.go:138] unexpected machine state, will restart: <nil>
I0701 04:50:50.768532   10310 out.go:177] * Restarting existing qemu2 VM for "functional-750000" ...
I0701 04:50:50.772616   10310 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:52:87:b2:a7:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/disk.qcow2
I0701 04:50:50.774536   10310 main.go:141] libmachine: STDOUT: 
I0701 04:50:50.774550   10310 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0701 04:50:50.774578   10310 fix.go:56] duration metric: took 14.447542ms for fixHost
I0701 04:50:50.774582   10310 start.go:83] releasing machines lock for "functional-750000", held for 14.460959ms
W0701 04:50:50.774586   10310 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0701 04:50:50.774625   10310 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0701 04:50:50.774636   10310 start.go:728] Will try again in 5 seconds ...
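
Editor's note: the failing command above also documents the network wiring: socket_vmnet_client dials the UNIX socket /var/run/socket_vmnet and hands the connected descriptor to qemu-system-aarch64 as fd 3 (hence "-netdev socket,id=net0,fd=3"). "Connection refused" therefore means nothing is listening on that socket. A minimal check from the host, assuming BSD nc (which supports UNIX sockets via -U):

	nc -U /var/run/socket_vmnet
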
I0701 04:50:55.776807   10310 start.go:360] acquireMachinesLock for functional-750000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0701 04:50:55.777239   10310 start.go:364] duration metric: took 355.417µs to acquireMachinesLock for "functional-750000"
I0701 04:50:55.777404   10310 start.go:96] Skipping create...Using existing machine configuration
I0701 04:50:55.777414   10310 fix.go:54] fixHost starting: 
I0701 04:50:55.778183   10310 fix.go:112] recreateIfNeeded on functional-750000: state=Stopped err=<nil>
W0701 04:50:55.778204   10310 fix.go:138] unexpected machine state, will restart: <nil>
I0701 04:50:55.781813   10310 out.go:177] * Restarting existing qemu2 VM for "functional-750000" ...
I0701 04:50:55.786684   10310 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:52:87:b2:a7:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/functional-750000/disk.qcow2
I0701 04:50:55.796442   10310 main.go:141] libmachine: STDOUT: 
I0701 04:50:55.796517   10310 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0701 04:50:55.796597   10310 fix.go:56] duration metric: took 19.184375ms for fixHost
I0701 04:50:55.796613   10310 start.go:83] releasing machines lock for "functional-750000", held for 19.3565ms
W0701 04:50:55.796796   10310 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0701 04:50:55.803725   10310 out.go:177] 
W0701 04:50:55.807669   10310 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0701 04:50:55.807683   10310 out.go:239] * 
W0701 04:50:55.809606   10310 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0701 04:50:55.817653   10310 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
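
Editor's note: the remaining functional-750000 failures in this report are knock-on effects of the driver start error above: the VM never boots, so no kubeconfig context or SSH target is ever created. A remediation sketch, assuming socket_vmnet was installed via Homebrew (the driver and network names are taken from the cluster config dumped above):

	sudo brew services restart socket_vmnet
	minikube delete -p functional-750000
	minikube start -p functional-750000 --driver=qemu2 --network=socket_vmnet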

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-750000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-750000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.459125ms)

** stderr ** 
	error: context "functional-750000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-750000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
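
Editor's note: kubectl resolves --context against the active kubeconfig, and minikube only writes the "functional-750000" entry after a successful start, so every kubectl call below fails the same way. To list the contexts that do exist:

	kubectl config get-contexts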

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-750000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-750000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-750000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-750000 --alsologtostderr -v=1] stderr:
I0701 04:51:35.683454   10625 out.go:291] Setting OutFile to fd 1 ...
I0701 04:51:35.684011   10625 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:35.684014   10625 out.go:304] Setting ErrFile to fd 2...
I0701 04:51:35.684017   10625 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:35.684188   10625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:51:35.684394   10625 mustload.go:65] Loading cluster: functional-750000
I0701 04:51:35.684594   10625 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:51:35.686187   10625 out.go:177] * The control-plane node functional-750000 host is not running: state=Stopped
I0701 04:51:35.690003   10625 out.go:177]   To start a cluster, run: "minikube start -p functional-750000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (40.973792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 status: exit status 7 (29.416166ms)

-- stdout --
	functional-750000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-750000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (30.237333ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-750000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 status -o json: exit status 7 (30.557292ms)

-- stdout --
	{"Name":"functional-750000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-750000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (30.35225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
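
Editor's note: exit status 7 from "minikube status" appears to be the command's status bitmask with all bits set, roughly 1 (host) | 2 (kubelet) | 4 (apiserver) all down, which is why the helper treats it as "may be ok" rather than a command error. The code can be inspected directly:

	out/minikube-darwin-arm64 -p functional-750000 status -o json; echo "exit=$?"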

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-750000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-750000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.452334ms)

** stderr ** 
	error: context "functional-750000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-750000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-750000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-750000 describe po hello-node-connect: exit status 1 (26.29325ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
functional_test.go:1600: "kubectl --context functional-750000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-750000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-750000 logs -l app=hello-node-connect: exit status 1 (26.999875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
functional_test.go:1606: "kubectl --context functional-750000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-750000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-750000 describe svc hello-node-connect: exit status 1 (26.924917ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
functional_test.go:1612: "kubectl --context functional-750000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (30.866958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-750000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (30.682208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "echo hello": exit status 83 (46.7015ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-750000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"\n"*. args "out/minikube-darwin-arm64 -p functional-750000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "cat /etc/hostname": exit status 83 (45.901792ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-750000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-750000"- but got *"* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"\n"*. args "out/minikube-darwin-arm64 -p functional-750000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (34.775958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (55.46125ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-750000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh -n functional-750000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh -n functional-750000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.867625ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-750000 ssh -n functional-750000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-750000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-750000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 cp functional-750000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2471003920/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 cp functional-750000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2471003920/001/cp-test.txt: exit status 83 (38.683458ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-750000 cp functional-750000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2471003920/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh -n functional-750000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh -n functional-750000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.683792ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-750000 ssh -n functional-750000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2471003920/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (47.777125ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-750000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh -n functional-750000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh -n functional-750000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (40.91825ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-750000 ssh -n functional-750000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-750000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-750000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10003/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/test/nested/copy/10003/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/test/nested/copy/10003/hosts": exit status 83 (41.114709ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/test/nested/copy/10003/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-750000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-750000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (30.672167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10003.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/ssl/certs/10003.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/ssl/certs/10003.pem": exit status 83 (39.396417ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/10003.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-750000 ssh \"sudo cat /etc/ssl/certs/10003.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/10003.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-750000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-750000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10003.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /usr/share/ca-certificates/10003.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /usr/share/ca-certificates/10003.pem": exit status 83 (39.760667ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/10003.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-750000 ssh \"sudo cat /usr/share/ca-certificates/10003.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/10003.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-750000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-750000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (43.570042ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-750000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-750000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-750000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/100032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/ssl/certs/100032.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/ssl/certs/100032.pem": exit status 83 (41.666875ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/100032.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-750000 ssh \"sudo cat /etc/ssl/certs/100032.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/100032.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-750000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-750000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/100032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /usr/share/ca-certificates/100032.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /usr/share/ca-certificates/100032.pem": exit status 83 (38.474834ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/100032.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-750000 ssh \"sudo cat /usr/share/ca-certificates/100032.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/100032.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-750000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-750000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (41.346375ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-750000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-750000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-750000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (30.746875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)
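
Editor's note: the "-" blocks above are the minikube_test.pem and minikube_test2.pem fixtures, and names like /etc/ssl/certs/51391683.0 are the OpenSSL subject-hash aliases of those same certificates. Assuming the fixture file is at hand, the hash can be reproduced with:

	openssl x509 -noout -hash -in minikube_test.pem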

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-750000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-750000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.938708ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-750000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-750000 -n functional-750000: exit status 7 (31.561875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
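Note: every assertion in this block fails for the same reason: the kubeconfig context "functional-750000" does not exist, so the label query never reaches a node. A minimal manual reproduction of the check (hedged; the profile name and go-template are taken from the failing command above, and this is not part of the harness):

    # confirm whether kubectl knows the context at all
    kubectl config get-contexts
    # once the cluster is up, re-run the label query from the test
    kubectl --context functional-750000 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'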

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo systemctl is-active crio": exit status 83 (38.553583ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
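Note: the test asserts that, with the docker runtime active, `systemctl is-active crio` inside the guest reports the unit as inactive; here the SSH step exits 83 because the host is stopped, so the assertion never runs. A sketch of the check against a running node (profile name from the log):

    # prints "inactive" and exits non-zero when the crio unit is not running
    out/minikube-darwin-arm64 -p functional-750000 ssh "sudo systemctl is-active crio"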

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 version -o=json --components: exit status 83 (41.945208ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
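Note: the ten "expected to see ..." lines above are one failure, not ten: exit status 83 means no component list was ever produced. The call under test, which on a running host should emit a JSON object naming buildctl, containerd, crictl, docker, and the rest:

    out/minikube-darwin-arm64 -p functional-750000 version -o=json --components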

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-750000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-750000 image ls --format short --alsologtostderr:
I0701 04:51:36.082459   10640 out.go:291] Setting OutFile to fd 1 ...
I0701 04:51:36.082625   10640 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:36.082628   10640 out.go:304] Setting ErrFile to fd 2...
I0701 04:51:36.082630   10640 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:36.082770   10640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:51:36.083209   10640 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:51:36.083268   10640 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
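Note: `image ls` itself succeeds here; it returns an empty list because no VM backs the profile. A sketch of the check the test performs (assuming a running cluster; the grep target is the expectation from the failure message):

    # short format prints one image reference per line
    out/minikube-darwin-arm64 -p functional-750000 image ls --format short | grep registry.k8s.io/pause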

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-750000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-750000 image ls --format table --alsologtostderr:
I0701 04:51:36.297957   10652 out.go:291] Setting OutFile to fd 1 ...
I0701 04:51:36.298104   10652 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:36.298110   10652 out.go:304] Setting ErrFile to fd 2...
I0701 04:51:36.298112   10652 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:36.298236   10652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:51:36.298676   10652 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:51:36.298741   10652 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-750000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-750000 image ls --format json --alsologtostderr:
I0701 04:51:36.262938   10650 out.go:291] Setting OutFile to fd 1 ...
I0701 04:51:36.263074   10650 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:36.263077   10650 out.go:304] Setting ErrFile to fd 2...
I0701 04:51:36.263079   10650 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:36.263242   10650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:51:36.263673   10650 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:51:36.263734   10650 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-750000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-750000 image ls --format yaml --alsologtostderr:
I0701 04:51:36.117440   10642 out.go:291] Setting OutFile to fd 1 ...
I0701 04:51:36.117600   10642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:36.117603   10642 out.go:304] Setting ErrFile to fd 2...
I0701 04:51:36.117606   10642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:36.117728   10642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:51:36.118217   10642 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:51:36.118279   10642 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh pgrep buildkitd: exit status 83 (39.824709ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image build -t localhost/my-image:functional-750000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-750000 image build -t localhost/my-image:functional-750000 testdata/build --alsologtostderr:
I0701 04:51:36.192008   10646 out.go:291] Setting OutFile to fd 1 ...
I0701 04:51:36.193025   10646 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:36.193031   10646 out.go:304] Setting ErrFile to fd 2...
I0701 04:51:36.193034   10646 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:51:36.193178   10646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:51:36.193592   10646 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:51:36.194036   10646 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:51:36.194270   10646 build_images.go:133] succeeded building to: 
I0701 04:51:36.194273   10646 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image ls
functional_test.go:442: expected "localhost/my-image:functional-750000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)
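Note: with no node to build on, `image build` reports success vacuously ("succeeded building to:" is empty), so the tag never shows up in `image ls`. The intended sequence, with the tag and context directory from the log:

    out/minikube-darwin-arm64 -p functional-750000 image build -t localhost/my-image:functional-750000 testdata/build
    out/minikube-darwin-arm64 -p functional-750000 image ls | grep localhost/my-image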

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-750000 docker-env) && out/minikube-darwin-arm64 status -p functional-750000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-750000 docker-env) && out/minikube-darwin-arm64 status -p functional-750000": exit status 1 (43.730333ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)
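Note: `docker-env` prints shell exports that point DOCKER_HOST at the VM's docker daemon, so the eval-then-status pattern can only pass against a running host. The shape of the failing check, unrolled:

    eval $(out/minikube-darwin-arm64 -p functional-750000 docker-env)
    out/minikube-darwin-arm64 status -p functional-750000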

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 update-context --alsologtostderr -v=2: exit status 83 (43.692584ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
** stderr ** 
	I0701 04:51:35.955155   10634 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:51:35.956199   10634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:51:35.956202   10634 out.go:304] Setting ErrFile to fd 2...
	I0701 04:51:35.956205   10634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:51:35.956371   10634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:51:35.956585   10634 mustload.go:65] Loading cluster: functional-750000
	I0701 04:51:35.956781   10634 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:51:35.961309   10634 out.go:177] * The control-plane node functional-750000 host is not running: state=Stopped
	I0701 04:51:35.965302   10634 out.go:177]   To start a cluster, run: "minikube start -p functional-750000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-750000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)
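Note: `update-context` rewrites the profile's kubeconfig entry to match the VM's current IP and, per the test's expectation above, prints "No changes" when nothing moved; with the host stopped it bails out before touching kubeconfig, hence the advice text instead. A sketch of the call (hedged; minus the test's logging flags):

    out/minikube-darwin-arm64 -p functional-750000 update-context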

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 update-context --alsologtostderr -v=2: exit status 83 (41.58275ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
** stderr ** 
	I0701 04:51:35.998666   10636 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:51:35.998794   10636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:51:35.998797   10636 out.go:304] Setting ErrFile to fd 2...
	I0701 04:51:35.998799   10636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:51:35.998932   10636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:51:35.999153   10636 mustload.go:65] Loading cluster: functional-750000
	I0701 04:51:35.999336   10636 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:51:36.003397   10636 out.go:177] * The control-plane node functional-750000 host is not running: state=Stopped
	I0701 04:51:36.007116   10636 out.go:177]   To start a cluster, run: "minikube start -p functional-750000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-750000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 update-context --alsologtostderr -v=2: exit status 83 (41.632584ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
** stderr ** 
	I0701 04:51:36.040957   10638 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:51:36.041100   10638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:51:36.041103   10638 out.go:304] Setting ErrFile to fd 2...
	I0701 04:51:36.041105   10638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:51:36.041230   10638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:51:36.041432   10638 mustload.go:65] Loading cluster: functional-750000
	I0701 04:51:36.041646   10638 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:51:36.045358   10638 out.go:177] * The control-plane node functional-750000 host is not running: state=Stopped
	I0701 04:51:36.049320   10638 out.go:177]   To start a cluster, run: "minikube start -p functional-750000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-750000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-750000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-750000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.44575ms)

** stderr ** 
	error: context "functional-750000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-750000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)
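Note: this deployment is the fixture for the rest of the ServiceCmd subtests, so the single missing-context error here cascades into the List, JSONOutput, HTTPS, Format, and URL failures below. The fixture command, verbatim from the log:

    kubectl --context functional-750000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8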

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 service list: exit status 83 (43.788667ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-750000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 service list -o json: exit status 83 (42.613667ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-750000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 service --namespace=default --https --url hello-node: exit status 83 (41.906667ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-750000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 service hello-node --url --format={{.IP}}: exit status 83 (43.957958ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-750000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 service hello-node --url: exit status 83 (46.858291ms)

-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-750000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test.go:1565: failed to parse "* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"": parse "* The control-plane node functional-750000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-750000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
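Note: the net/url parse error is a downstream symptom: stdout carried the two-line advice message instead of a URL. On a healthy run the command prints a single endpoint of the form http://<node-ip>:<nodeport>:

    out/minikube-darwin-arm64 -p functional-750000 service hello-node --url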

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0701 04:50:57.634027   10427 out.go:291] Setting OutFile to fd 1 ...
I0701 04:50:57.634229   10427 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:50:57.634234   10427 out.go:304] Setting ErrFile to fd 2...
I0701 04:50:57.634237   10427 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:50:57.634376   10427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:50:57.634603   10427 mustload.go:65] Loading cluster: functional-750000
I0701 04:50:57.634815   10427 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:50:57.639570   10427 out.go:177] * The control-plane node functional-750000 host is not running: state=Stopped
I0701 04:50:57.653584   10427 out.go:177]   To start a cluster, run: "minikube start -p functional-750000"

stdout: * The control-plane node functional-750000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-750000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 10428: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)
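Note: both tunnel daemons exit immediately with status 83, so by the time the teardown helper runs there is no process left to stop; the "file already closed" reads are cleanup noise, not the root failure. The daemon being exercised (requires a running cluster; stays in the foreground until interrupted):

    out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr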

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image load --daemon gcr.io/google-containers/addon-resizer:functional-750000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-750000 image load --daemon gcr.io/google-containers/addon-resizer:functional-750000 --alsologtostderr: (1.297682041s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-750000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-750000": client config: context "functional-750000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (94.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-750000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-750000 get svc nginx-svc: exit status 1 (68.602542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-750000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-750000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (94.67s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image load --daemon gcr.io/google-containers/addon-resizer:functional-750000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-750000 image load --daemon gcr.io/google-containers/addon-resizer:functional-750000 --alsologtostderr: (1.29834975s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-750000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.207583125s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-750000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image load --daemon gcr.io/google-containers/addon-resizer:functional-750000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-750000 image load --daemon gcr.io/google-containers/addon-resizer:functional-750000 --alsologtostderr: (1.201592292s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-750000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.47s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image save gcr.io/google-containers/addon-resizer:functional-750000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
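Note: no tarball is written because there is no runtime to export from, which in turn starves ImageLoadFromFile below. The intended round trip, with the tag and path from the log:

    out/minikube-darwin-arm64 -p functional-750000 image save gcr.io/google-containers/addon-resizer:functional-750000 /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-arm64 -p functional-750000 image load /Users/jenkins/workspace/addon-resizer-save.tar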

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-750000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.032917083s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
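Note: resolver #8 in the scutil dump shows cluster.local correctly scoped to 10.96.0.10, but with no tunnel alive nothing answers on that address, so dig times out rather than mis-resolving. The probe, verbatim from the test:

    # 5s timeout, 3 tries, querying the in-cluster DNS service directly
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A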

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (36.97s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (36.97s)

TestMultiControlPlane/serial/StartCluster (9.8s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-066000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-066000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.726122334s)

-- stdout --
	* [ha-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-066000" primary control-plane node in "ha-066000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-066000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 04:53:34.854992   10684 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:53:34.855132   10684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:53:34.855135   10684 out.go:304] Setting ErrFile to fd 2...
	I0701 04:53:34.855138   10684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:53:34.855276   10684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:53:34.856347   10684 out.go:298] Setting JSON to false
	I0701 04:53:34.872536   10684 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6783,"bootTime":1719828031,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:53:34.872605   10684 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:53:34.877182   10684 out.go:177] * [ha-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:53:34.883058   10684 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 04:53:34.883106   10684 notify.go:220] Checking for updates...
	I0701 04:53:34.889951   10684 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:53:34.893060   10684 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:53:34.896112   10684 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:53:34.897352   10684 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 04:53:34.900125   10684 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 04:53:34.903273   10684 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:53:34.906863   10684 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 04:53:34.914042   10684 start.go:297] selected driver: qemu2
	I0701 04:53:34.914048   10684 start.go:901] validating driver "qemu2" against <nil>
	I0701 04:53:34.914055   10684 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 04:53:34.916330   10684 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 04:53:34.919072   10684 out.go:177] * Automatically selected the socket_vmnet network
	I0701 04:53:34.922139   10684 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 04:53:34.922166   10684 cni.go:84] Creating CNI manager for ""
	I0701 04:53:34.922171   10684 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0701 04:53:34.922174   10684 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 04:53:34.922200   10684 start.go:340] cluster config:
	{Name:ha-066000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:53:34.926055   10684 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 04:53:34.932997   10684 out.go:177] * Starting "ha-066000" primary control-plane node in "ha-066000" cluster
	I0701 04:53:34.937080   10684 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 04:53:34.937095   10684 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 04:53:34.937102   10684 cache.go:56] Caching tarball of preloaded images
	I0701 04:53:34.937176   10684 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 04:53:34.937182   10684 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 04:53:34.937374   10684 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/ha-066000/config.json ...
	I0701 04:53:34.937385   10684 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/ha-066000/config.json: {Name:mkd7c03dc92a24a5c3e939521228c1f63d27d0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 04:53:34.937717   10684 start.go:360] acquireMachinesLock for ha-066000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:53:34.937750   10684 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "ha-066000"
	I0701 04:53:34.937765   10684 start.go:93] Provisioning new machine with config: &{Name:ha-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 04:53:34.937790   10684 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 04:53:34.946019   10684 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 04:53:34.963074   10684 start.go:159] libmachine.API.Create for "ha-066000" (driver="qemu2")
	I0701 04:53:34.963100   10684 client.go:168] LocalClient.Create starting
	I0701 04:53:34.963170   10684 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 04:53:34.963201   10684 main.go:141] libmachine: Decoding PEM data...
	I0701 04:53:34.963215   10684 main.go:141] libmachine: Parsing certificate...
	I0701 04:53:34.963250   10684 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 04:53:34.963274   10684 main.go:141] libmachine: Decoding PEM data...
	I0701 04:53:34.963282   10684 main.go:141] libmachine: Parsing certificate...
	I0701 04:53:34.963702   10684 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 04:53:35.095089   10684 main.go:141] libmachine: Creating SSH key...
	I0701 04:53:35.148023   10684 main.go:141] libmachine: Creating Disk image...
	I0701 04:53:35.148028   10684 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 04:53:35.148200   10684 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2
	I0701 04:53:35.157327   10684 main.go:141] libmachine: STDOUT: 
	I0701 04:53:35.157350   10684 main.go:141] libmachine: STDERR: 
	I0701 04:53:35.157399   10684 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2 +20000M
	I0701 04:53:35.165445   10684 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 04:53:35.165458   10684 main.go:141] libmachine: STDERR: 
	I0701 04:53:35.165477   10684 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2
	I0701 04:53:35.165480   10684 main.go:141] libmachine: Starting QEMU VM...
	I0701 04:53:35.165519   10684 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:1b:12:dd:78:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2
	I0701 04:53:35.167109   10684 main.go:141] libmachine: STDOUT: 
	I0701 04:53:35.167124   10684 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:53:35.167140   10684 client.go:171] duration metric: took 204.03275ms to LocalClient.Create
	I0701 04:53:37.169325   10684 start.go:128] duration metric: took 2.231498291s to createHost
	I0701 04:53:37.169371   10684 start.go:83] releasing machines lock for "ha-066000", held for 2.231600459s
	W0701 04:53:37.169431   10684 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:53:37.176616   10684 out.go:177] * Deleting "ha-066000" in qemu2 ...
	W0701 04:53:37.202699   10684 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:53:37.202730   10684 start.go:728] Will try again in 5 seconds ...
	I0701 04:53:42.205004   10684 start.go:360] acquireMachinesLock for ha-066000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:53:42.205452   10684 start.go:364] duration metric: took 336.458µs to acquireMachinesLock for "ha-066000"
	I0701 04:53:42.205580   10684 start.go:93] Provisioning new machine with config: &{Name:ha-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 04:53:42.205892   10684 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 04:53:42.219729   10684 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 04:53:42.269622   10684 start.go:159] libmachine.API.Create for "ha-066000" (driver="qemu2")
	I0701 04:53:42.269673   10684 client.go:168] LocalClient.Create starting
	I0701 04:53:42.269777   10684 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 04:53:42.269830   10684 main.go:141] libmachine: Decoding PEM data...
	I0701 04:53:42.269843   10684 main.go:141] libmachine: Parsing certificate...
	I0701 04:53:42.269916   10684 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 04:53:42.269960   10684 main.go:141] libmachine: Decoding PEM data...
	I0701 04:53:42.269970   10684 main.go:141] libmachine: Parsing certificate...
	I0701 04:53:42.270574   10684 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 04:53:42.412653   10684 main.go:141] libmachine: Creating SSH key...
	I0701 04:53:42.488096   10684 main.go:141] libmachine: Creating Disk image...
	I0701 04:53:42.488102   10684 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 04:53:42.488262   10684 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2
	I0701 04:53:42.497427   10684 main.go:141] libmachine: STDOUT: 
	I0701 04:53:42.497442   10684 main.go:141] libmachine: STDERR: 
	I0701 04:53:42.497506   10684 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2 +20000M
	I0701 04:53:42.505310   10684 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 04:53:42.505323   10684 main.go:141] libmachine: STDERR: 
	I0701 04:53:42.505335   10684 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2
	I0701 04:53:42.505341   10684 main.go:141] libmachine: Starting QEMU VM...
	I0701 04:53:42.505375   10684 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:0d:2f:af:33:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2
	I0701 04:53:42.506989   10684 main.go:141] libmachine: STDOUT: 
	I0701 04:53:42.507002   10684 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:53:42.507014   10684 client.go:171] duration metric: took 237.333667ms to LocalClient.Create
	I0701 04:53:44.509190   10684 start.go:128] duration metric: took 2.30322125s to createHost
	I0701 04:53:44.509250   10684 start.go:83] releasing machines lock for "ha-066000", held for 2.303761833s
	W0701 04:53:44.509657   10684 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:53:44.522380   10684 out.go:177] 
	W0701 04:53:44.526418   10684 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:53:44.526451   10684 out.go:239] * 
	* 
	W0701 04:53:44.529011   10684 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:53:44.538371   10684 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-066000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (68.532875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.80s)
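Note: every TestMultiControlPlane failure that follows cascades from the error captured above. QEMU is launched through socket_vmnet_client, the connection to /var/run/socket_vmnet is refused, and so no VM (and therefore no cluster, kubeconfig context, or API server) ever comes up. A minimal Go sketch of the same connectivity check, using the socket path taken from the qemu command line in the log (whether the socket_vmnet daemon was actually running on the CI host is the open question):

// probe.go - dial the unix socket that socket_vmnet_client needs.
// If no daemon is listening, Dial fails with "connection refused",
// matching the STDERR captured in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the qemu invocation above

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}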

                                                
                                    
TestMultiControlPlane/serial/DeployApp (113.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.554209ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-066000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- rollout status deployment/busybox: exit status 1 (56.710834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.403875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.919667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.968959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.782416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.049458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.464834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.725834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.715208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.749958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.693541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.95175ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.388042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.719792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.656958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.099958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (30.518792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (113.68s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-066000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.199833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-066000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (30.421625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-066000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-066000 -v=7 --alsologtostderr: exit status 83 (44.019916ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-066000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-066000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 04:55:38.421414   10779 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:38.422017   10779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.422021   10779 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:38.422024   10779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.422187   10779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:38.422417   10779 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:38.422607   10779 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:38.427995   10779 out.go:177] * The control-plane node ha-066000 host is not running: state=Stopped
	I0701 04:55:38.433018   10779 out.go:177]   To start a cluster, run: "minikube start -p ha-066000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-066000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (30.361166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-066000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-066000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.253333ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-066000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-066000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-066000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (30.83825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
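Note: the secondary error at ha_test.go:264 ("unexpected end of JSON input") is a byproduct of the primary one. Because kubectl exited non-zero, its stdout was empty, and decoding an empty byte slice with encoding/json yields exactly that message. A small sketch reproducing it:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels) // kubectl printed nothing
	fmt.Println(err)                           // "unexpected end of JSON input"
}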

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-066000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-066000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-066000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-066000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-066000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-066000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-066000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-066000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (29.258625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status --output json -v=7 --alsologtostderr: exit status 7 (30.6835ms)

                                                
                                                
-- stdout --
	{"Name":"ha-066000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 04:55:38.631139   10791 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:38.631311   10791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.631314   10791 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:38.631317   10791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.631460   10791 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:38.631591   10791 out.go:298] Setting JSON to true
	I0701 04:55:38.631605   10791 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:38.631672   10791 notify.go:220] Checking for updates...
	I0701 04:55:38.631797   10791 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:38.631803   10791 status.go:255] checking status of ha-066000 ...
	I0701 04:55:38.632021   10791 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:55:38.632025   10791 status.go:343] host is not running, skipping remaining checks
	I0701 04:55:38.632027   10791 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-066000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (30.451625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
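Note: the unmarshal error at ha_test.go:333 is a shape mismatch rather than corrupt output. With only one node, "minikube status --output json" printed a single JSON object (see the stdout block above), while the test decodes into a slice of statuses. A minimal sketch, with a hypothetical Status type standing in for minikube's cmd.Status:

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a stand-in for cmd.Status, trimmed to the fields that
// appear in the stdout block above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	raw := []byte(`{"Name":"ha-066000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	// Decoding an object into a slice reproduces the failure above:
	// "json: cannot unmarshal object into Go value of type []main.Status"
	var many []Status
	if err := json.Unmarshal(raw, &many); err != nil {
		fmt.Println(err)
	}
}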

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 node stop m02 -v=7 --alsologtostderr: exit status 85 (49.441542ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 04:55:38.692507   10795 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:38.693075   10795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.693079   10795 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:38.693081   10795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.693231   10795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:38.693483   10795 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:38.693678   10795 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:38.698545   10795 out.go:177] 
	W0701 04:55:38.702511   10795 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0701 04:55:38.702516   10795 out.go:239] * 
	* 
	W0701 04:55:38.704601   10795 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:55:38.708393   10795 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-066000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (29.97075ms)

                                                
                                                
-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 04:55:38.741727   10797 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:38.741893   10797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.741896   10797 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:38.741898   10797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.742031   10797 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:38.742159   10797 out.go:298] Setting JSON to false
	I0701 04:55:38.742170   10797 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:38.742239   10797 notify.go:220] Checking for updates...
	I0701 04:55:38.742387   10797 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:38.742394   10797 status.go:255] checking status of ha-066000 ...
	I0701 04:55:38.742633   10797 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:55:38.742637   10797 status.go:343] host is not running, skipping remaining checks
	I0701 04:55:38.742639   10797 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr": ha-066000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr": ha-066000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr": ha-066000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr": ha-066000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (29.573917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-066000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-066000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-066000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-066000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (30.391209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
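
The Degraded assertion is decided entirely by the profile JSON dumped above: Status is "Stopped" rather than "Degraded", and Config.Nodes holds a single control-plane entry, so every multi-node check that follows fails the same way. A small sketch for pulling just those fields out of the same command the test runs (assumes jq is installed on the host):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | select(.Name == "ha-066000")
	        | {status: .Status, nodes: (.Config.Nodes | length)}'
	# With the profile above this reports status "Stopped" and nodes 1;
	# the test expects "Degraded" and a multi-node cluster.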

TestMultiControlPlane/serial/RestartSecondaryNode (58.47s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 node start m02 -v=7 --alsologtostderr: exit status 85 (45.207333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0701 04:55:38.877971   10806 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:38.878370   10806 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.878374   10806 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:38.878376   10806 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.878544   10806 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:38.878766   10806 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:38.878947   10806 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:38.880563   10806 out.go:177] 
	W0701 04:55:38.884497   10806 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0701 04:55:38.884502   10806 out.go:239] * 
	* 
	W0701 04:55:38.886540   10806 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:55:38.890484   10806 out.go:177] 

** /stderr **
ha_test.go:422: I0701 04:55:38.877971   10806 out.go:291] Setting OutFile to fd 1 ...
I0701 04:55:38.878370   10806 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:55:38.878374   10806 out.go:304] Setting ErrFile to fd 2...
I0701 04:55:38.878376   10806 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:55:38.878544   10806 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:55:38.878766   10806 mustload.go:65] Loading cluster: ha-066000
I0701 04:55:38.878947   10806 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:55:38.880563   10806 out.go:177] 
W0701 04:55:38.884497   10806 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0701 04:55:38.884502   10806 out.go:239] * 
* 
W0701 04:55:38.886540   10806 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0701 04:55:38.890484   10806 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-066000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (30.614833ms)

-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:55:38.924370   10808 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:38.924519   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.924522   10808 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:38.924524   10808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:38.924658   10808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:38.924771   10808 out.go:298] Setting JSON to false
	I0701 04:55:38.924782   10808 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:38.924847   10808 notify.go:220] Checking for updates...
	I0701 04:55:38.924977   10808 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:38.924983   10808 status.go:255] checking status of ha-066000 ...
	I0701 04:55:38.925209   10808 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:55:38.925214   10808 status.go:343] host is not running, skipping remaining checks
	I0701 04:55:38.925216   10808 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (73.541417ms)

-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:55:40.435828   10810 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:40.436023   10810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:40.436028   10810 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:40.436031   10810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:40.436222   10810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:40.436388   10810 out.go:298] Setting JSON to false
	I0701 04:55:40.436403   10810 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:40.436437   10810 notify.go:220] Checking for updates...
	I0701 04:55:40.436670   10810 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:40.436678   10810 status.go:255] checking status of ha-066000 ...
	I0701 04:55:40.436939   10810 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:55:40.436944   10810 status.go:343] host is not running, skipping remaining checks
	I0701 04:55:40.436947   10810 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (74.228667ms)

-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:55:41.795943   10812 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:41.796179   10812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:41.796183   10812 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:41.796187   10812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:41.796363   10812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:41.796511   10812 out.go:298] Setting JSON to false
	I0701 04:55:41.796526   10812 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:41.796565   10812 notify.go:220] Checking for updates...
	I0701 04:55:41.796769   10812 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:41.796778   10812 status.go:255] checking status of ha-066000 ...
	I0701 04:55:41.797053   10812 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:55:41.797058   10812 status.go:343] host is not running, skipping remaining checks
	I0701 04:55:41.797061   10812 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (73.295958ms)

-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:55:43.395865   10814 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:43.396064   10814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:43.396068   10814 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:43.396072   10814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:43.396246   10814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:43.396402   10814 out.go:298] Setting JSON to false
	I0701 04:55:43.396417   10814 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:43.396452   10814 notify.go:220] Checking for updates...
	I0701 04:55:43.396691   10814 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:43.396700   10814 status.go:255] checking status of ha-066000 ...
	I0701 04:55:43.396983   10814 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:55:43.396988   10814 status.go:343] host is not running, skipping remaining checks
	I0701 04:55:43.396991   10814 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (73.187625ms)

-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:55:45.295210   10818 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:45.295389   10818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:45.295394   10818 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:45.295397   10818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:45.295576   10818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:45.295733   10818 out.go:298] Setting JSON to false
	I0701 04:55:45.295748   10818 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:45.295787   10818 notify.go:220] Checking for updates...
	I0701 04:55:45.295998   10818 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:45.296006   10818 status.go:255] checking status of ha-066000 ...
	I0701 04:55:45.296292   10818 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:55:45.296297   10818 status.go:343] host is not running, skipping remaining checks
	I0701 04:55:45.296300   10818 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (74.218042ms)

-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:55:48.402402   10820 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:48.402590   10820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:48.402594   10820 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:48.402598   10820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:48.402778   10820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:48.402952   10820 out.go:298] Setting JSON to false
	I0701 04:55:48.402966   10820 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:48.403003   10820 notify.go:220] Checking for updates...
	I0701 04:55:48.403226   10820 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:48.403236   10820 status.go:255] checking status of ha-066000 ...
	I0701 04:55:48.403511   10820 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:55:48.403516   10820 status.go:343] host is not running, skipping remaining checks
	I0701 04:55:48.403519   10820 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (72.374708ms)

-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:55:55.154734   10822 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:55:55.154944   10822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:55.154949   10822 out.go:304] Setting ErrFile to fd 2...
	I0701 04:55:55.154952   10822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:55:55.155161   10822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:55:55.155317   10822 out.go:298] Setting JSON to false
	I0701 04:55:55.155333   10822 mustload.go:65] Loading cluster: ha-066000
	I0701 04:55:55.155365   10822 notify.go:220] Checking for updates...
	I0701 04:55:55.155608   10822 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:55:55.155617   10822 status.go:255] checking status of ha-066000 ...
	I0701 04:55:55.155930   10822 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:55:55.155935   10822 status.go:343] host is not running, skipping remaining checks
	I0701 04:55:55.155938   10822 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (74.757916ms)

-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:56:07.076446   10828 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:56:07.076659   10828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:07.076664   10828 out.go:304] Setting ErrFile to fd 2...
	I0701 04:56:07.076668   10828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:07.076857   10828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:56:07.077039   10828 out.go:298] Setting JSON to false
	I0701 04:56:07.077058   10828 mustload.go:65] Loading cluster: ha-066000
	I0701 04:56:07.077094   10828 notify.go:220] Checking for updates...
	I0701 04:56:07.077321   10828 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:56:07.077330   10828 status.go:255] checking status of ha-066000 ...
	I0701 04:56:07.077624   10828 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:56:07.077630   10828 status.go:343] host is not running, skipping remaining checks
	I0701 04:56:07.077633   10828 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (73.913708ms)

-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:56:23.873683   10832 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:56:23.873886   10832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:23.873891   10832 out.go:304] Setting ErrFile to fd 2...
	I0701 04:56:23.873894   10832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:23.874067   10832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:56:23.874251   10832 out.go:298] Setting JSON to false
	I0701 04:56:23.874267   10832 mustload.go:65] Loading cluster: ha-066000
	I0701 04:56:23.874308   10832 notify.go:220] Checking for updates...
	I0701 04:56:23.874546   10832 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:56:23.874555   10832 status.go:255] checking status of ha-066000 ...
	I0701 04:56:23.874851   10832 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:56:23.874857   10832 status.go:343] host is not running, skipping remaining checks
	I0701 04:56:23.874859   10832 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (73.1125ms)

-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:56:37.245502   10838 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:56:37.245713   10838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:37.245717   10838 out.go:304] Setting ErrFile to fd 2...
	I0701 04:56:37.245721   10838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:37.245906   10838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:56:37.246065   10838 out.go:298] Setting JSON to false
	I0701 04:56:37.246087   10838 mustload.go:65] Loading cluster: ha-066000
	I0701 04:56:37.246120   10838 notify.go:220] Checking for updates...
	I0701 04:56:37.246345   10838 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:56:37.246354   10838 status.go:255] checking status of ha-066000 ...
	I0701 04:56:37.246632   10838 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:56:37.246637   10838 status.go:343] host is not running, skipping remaining checks
	I0701 04:56:37.246640   10838 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (34.03575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.47s)
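
Exit status 85 (GUEST_NODE_RETRIEVE) means the requested node name is missing from the profile: StartCluster never brought the cluster up, so m02 was never added and "node start" has nothing to operate on. One way to see which nodes a profile knows about before retrying; node list is the same subcommand a later test invokes, and the one-node-per-line output format is an assumption:

	out/minikube-darwin-arm64 node list -p ha-066000
	# Only the primary ha-066000 entry appears, so
	# "node start m02" fails before ever touching a VM.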

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-066000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-066000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-066000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-066000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-066000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-066000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-066000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-066000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (30.153958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.62s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-066000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-066000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-066000 -v=7 --alsologtostderr: (3.253989125s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-066000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-066000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229383625s)

-- stdout --
	* [ha-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-066000" primary control-plane node in "ha-066000" cluster
	* Restarting existing qemu2 VM for "ha-066000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-066000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 04:56:40.706651   10869 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:56:40.706808   10869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:40.706812   10869 out.go:304] Setting ErrFile to fd 2...
	I0701 04:56:40.706815   10869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:40.706979   10869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:56:40.708237   10869 out.go:298] Setting JSON to false
	I0701 04:56:40.727804   10869 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6969,"bootTime":1719828031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:56:40.727887   10869 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:56:40.733262   10869 out.go:177] * [ha-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:56:40.741182   10869 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 04:56:40.741227   10869 notify.go:220] Checking for updates...
	I0701 04:56:40.748189   10869 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:56:40.751115   10869 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:56:40.754171   10869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:56:40.757238   10869 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 04:56:40.762835   10869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 04:56:40.766418   10869 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:56:40.766484   10869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:56:40.771167   10869 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 04:56:40.778160   10869 start.go:297] selected driver: qemu2
	I0701 04:56:40.778168   10869 start.go:901] validating driver "qemu2" against &{Name:ha-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:56:40.778262   10869 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 04:56:40.780927   10869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 04:56:40.780973   10869 cni.go:84] Creating CNI manager for ""
	I0701 04:56:40.780979   10869 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0701 04:56:40.781039   10869 start.go:340] cluster config:
	{Name:ha-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-066000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:56:40.784951   10869 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 04:56:40.792135   10869 out.go:177] * Starting "ha-066000" primary control-plane node in "ha-066000" cluster
	I0701 04:56:40.796135   10869 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 04:56:40.796148   10869 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 04:56:40.796156   10869 cache.go:56] Caching tarball of preloaded images
	I0701 04:56:40.796222   10869 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 04:56:40.796228   10869 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 04:56:40.796280   10869 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/ha-066000/config.json ...
	I0701 04:56:40.796737   10869 start.go:360] acquireMachinesLock for ha-066000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:56:40.796773   10869 start.go:364] duration metric: took 30.417µs to acquireMachinesLock for "ha-066000"
	I0701 04:56:40.796784   10869 start.go:96] Skipping create...Using existing machine configuration
	I0701 04:56:40.796789   10869 fix.go:54] fixHost starting: 
	I0701 04:56:40.796914   10869 fix.go:112] recreateIfNeeded on ha-066000: state=Stopped err=<nil>
	W0701 04:56:40.796927   10869 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 04:56:40.805201   10869 out.go:177] * Restarting existing qemu2 VM for "ha-066000" ...
	I0701 04:56:40.809211   10869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:0d:2f:af:33:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2
	I0701 04:56:40.811348   10869 main.go:141] libmachine: STDOUT: 
	I0701 04:56:40.811369   10869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:56:40.811401   10869 fix.go:56] duration metric: took 14.612542ms for fixHost
	I0701 04:56:40.811406   10869 start.go:83] releasing machines lock for "ha-066000", held for 14.628541ms
	W0701 04:56:40.811413   10869 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:56:40.811445   10869 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:56:40.811451   10869 start.go:728] Will try again in 5 seconds ...
	I0701 04:56:45.813315   10869 start.go:360] acquireMachinesLock for ha-066000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:56:45.813726   10869 start.go:364] duration metric: took 314.959µs to acquireMachinesLock for "ha-066000"
	I0701 04:56:45.813842   10869 start.go:96] Skipping create...Using existing machine configuration
	I0701 04:56:45.813863   10869 fix.go:54] fixHost starting: 
	I0701 04:56:45.814561   10869 fix.go:112] recreateIfNeeded on ha-066000: state=Stopped err=<nil>
	W0701 04:56:45.814592   10869 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 04:56:45.822907   10869 out.go:177] * Restarting existing qemu2 VM for "ha-066000" ...
	I0701 04:56:45.827133   10869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:0d:2f:af:33:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2
	I0701 04:56:45.836142   10869 main.go:141] libmachine: STDOUT: 
	I0701 04:56:45.836210   10869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:56:45.836281   10869 fix.go:56] duration metric: took 22.42025ms for fixHost
	I0701 04:56:45.836299   10869 start.go:83] releasing machines lock for "ha-066000", held for 22.544917ms
	W0701 04:56:45.836464   10869 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-066000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-066000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:56:45.844905   10869 out.go:177] 
	W0701 04:56:45.849008   10869 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:56:45.849057   10869 out.go:239] * 
	* 
	W0701 04:56:45.851503   10869 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:56:45.859813   10869 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-066000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-066000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (33.760292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.62s)
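Every FAIL in this block shares one root cause: nothing is listening on /var/run/socket_vmnet on the build host, so each attempt by the qemu2 driver to launch the VM dies with "Connection refused". A quick way to confirm that condition independently of minikube is to dial the unix socket directly; the standalone Go probe below is a sketch (the socket path is taken from the log above; the program itself is not part of the test suite):

	// socketprobe.go - standalone sketch, not minikube code. Checks whether
	// the socket_vmnet daemon is accepting connections on the unix socket
	// that the qemu2 driver uses (path taken from the log above).
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// This is the condition behind every "Connection refused" above.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}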

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.29575ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-066000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-066000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 04:56:46.004897   10881 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:56:46.005321   10881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:46.005325   10881 out.go:304] Setting ErrFile to fd 2...
	I0701 04:56:46.005328   10881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:46.005505   10881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:56:46.005723   10881 mustload.go:65] Loading cluster: ha-066000
	I0701 04:56:46.005919   10881 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:56:46.010549   10881 out.go:177] * The control-plane node ha-066000 host is not running: state=Stopped
	I0701 04:56:46.014602   10881 out.go:177]   To start a cluster, run: "minikube start -p ha-066000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-066000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (29.843792ms)

                                                
                                                
-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 04:56:46.047370   10883 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:56:46.047543   10883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:46.047547   10883 out.go:304] Setting ErrFile to fd 2...
	I0701 04:56:46.047549   10883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:46.047680   10883 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:56:46.047820   10883 out.go:298] Setting JSON to false
	I0701 04:56:46.047832   10883 mustload.go:65] Loading cluster: ha-066000
	I0701 04:56:46.047908   10883 notify.go:220] Checking for updates...
	I0701 04:56:46.048049   10883 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:56:46.048055   10883 status.go:255] checking status of ha-066000 ...
	I0701 04:56:46.048258   10883 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:56:46.048262   10883 status.go:343] host is not running, skipping remaining checks
	I0701 04:56:46.048266   10883 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (30.128917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-066000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-066000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-066000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-066000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (30.064459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
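The Degraded* checks shell out to "minikube profile list --output json" and assert on the profile's top-level "Status" field, which here is "Stopped" instead of the expected "Degraded". A minimal sketch of that decoding step, keeping only the two fields the assertion needs (field names come from the JSON captured above; the struct and variable names are invented for illustration):

	// profilestatus.go - sketch of decoding `minikube profile list --output json`.
	// Field names are taken from the JSON captured above; type names are made up.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed-down version of the output captured above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-066000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// ha_test.go:413 expects "Degraded" here; this run reports "Stopped".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}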

                                                
                                    
TestMultiControlPlane/serial/StopCluster (2.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-066000 stop -v=7 --alsologtostderr: (2.123265167s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr: exit status 7 (68.141375ms)

                                                
                                                
-- stdout --
	ha-066000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 04:56:48.345215   10904 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:56:48.345413   10904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:48.345418   10904 out.go:304] Setting ErrFile to fd 2...
	I0701 04:56:48.345421   10904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:48.345599   10904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:56:48.345764   10904 out.go:298] Setting JSON to false
	I0701 04:56:48.345783   10904 mustload.go:65] Loading cluster: ha-066000
	I0701 04:56:48.345824   10904 notify.go:220] Checking for updates...
	I0701 04:56:48.346047   10904 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:56:48.346055   10904 status.go:255] checking status of ha-066000 ...
	I0701 04:56:48.346318   10904 status.go:330] ha-066000 host status = "Stopped" (err=<nil>)
	I0701 04:56:48.346323   10904 status.go:343] host is not running, skipping remaining checks
	I0701 04:56:48.346326   10904 status.go:257] ha-066000 status: &{Name:ha-066000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr": ha-066000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr": ha-066000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-066000 status -v=7 --alsologtostderr": ha-066000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (32.856459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.22s)
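The StopCluster assertions (ha_test.go:543, :549, and :552 in the messages above) inspect the textual status output, expecting entries for two control planes, three kubelets, and two apiservers; since only the primary node was ever created, each expectation fails. A hedged sketch of that style of substring check against the single-node output captured above (the exact matching logic in ha_test.go may differ):

	// statuscount.go - sketch of substring checks like those in ha_test.go.
	// The status text is the single-node output captured in the block above.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		status := "ha-066000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		// The test expects a three-node HA cluster: after the secondary delete,
		// 2 control planes / apiservers and 3 kubelets, all stopped.
		if n := strings.Count(status, "type: Control Plane"); n != 2 {
			fmt.Printf("want 2 control-plane entries, got %d\n", n)
		}
		if n := strings.Count(status, "kubelet: Stopped"); n != 3 {
			fmt.Printf("want 3 stopped kubelets, got %d\n", n)
		}
		if n := strings.Count(status, "apiserver: Stopped"); n != 2 {
			fmt.Printf("want 2 stopped apiservers, got %d\n", n)
		}
	}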

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-066000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-066000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.184114917s)

                                                
                                                
-- stdout --
	* [ha-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-066000" primary control-plane node in "ha-066000" cluster
	* Restarting existing qemu2 VM for "ha-066000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-066000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 04:56:48.407793   10908 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:56:48.407937   10908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:48.407940   10908 out.go:304] Setting ErrFile to fd 2...
	I0701 04:56:48.407942   10908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:48.408089   10908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:56:48.409124   10908 out.go:298] Setting JSON to false
	I0701 04:56:48.425032   10908 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6977,"bootTime":1719828031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:56:48.425097   10908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:56:48.430298   10908 out.go:177] * [ha-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:56:48.437206   10908 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 04:56:48.437298   10908 notify.go:220] Checking for updates...
	I0701 04:56:48.444275   10908 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:56:48.447196   10908 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:56:48.450239   10908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:56:48.453306   10908 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 04:56:48.456185   10908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 04:56:48.459452   10908 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:56:48.459714   10908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:56:48.464208   10908 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 04:56:48.471155   10908 start.go:297] selected driver: qemu2
	I0701 04:56:48.471161   10908 start.go:901] validating driver "qemu2" against &{Name:ha-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:56:48.471206   10908 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 04:56:48.473338   10908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 04:56:48.473360   10908 cni.go:84] Creating CNI manager for ""
	I0701 04:56:48.473364   10908 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0701 04:56:48.473415   10908 start.go:340] cluster config:
	{Name:ha-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-066000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:56:48.476714   10908 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 04:56:48.484129   10908 out.go:177] * Starting "ha-066000" primary control-plane node in "ha-066000" cluster
	I0701 04:56:48.488195   10908 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 04:56:48.488209   10908 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 04:56:48.488217   10908 cache.go:56] Caching tarball of preloaded images
	I0701 04:56:48.488268   10908 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 04:56:48.488273   10908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 04:56:48.488333   10908 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/ha-066000/config.json ...
	I0701 04:56:48.488755   10908 start.go:360] acquireMachinesLock for ha-066000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:56:48.488780   10908 start.go:364] duration metric: took 19.458µs to acquireMachinesLock for "ha-066000"
	I0701 04:56:48.488789   10908 start.go:96] Skipping create...Using existing machine configuration
	I0701 04:56:48.488793   10908 fix.go:54] fixHost starting: 
	I0701 04:56:48.488899   10908 fix.go:112] recreateIfNeeded on ha-066000: state=Stopped err=<nil>
	W0701 04:56:48.488907   10908 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 04:56:48.493211   10908 out.go:177] * Restarting existing qemu2 VM for "ha-066000" ...
	I0701 04:56:48.501208   10908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:0d:2f:af:33:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2
	I0701 04:56:48.502998   10908 main.go:141] libmachine: STDOUT: 
	I0701 04:56:48.503015   10908 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:56:48.503042   10908 fix.go:56] duration metric: took 14.248708ms for fixHost
	I0701 04:56:48.503046   10908 start.go:83] releasing machines lock for "ha-066000", held for 14.262334ms
	W0701 04:56:48.503052   10908 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:56:48.503083   10908 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:56:48.503088   10908 start.go:728] Will try again in 5 seconds ...
	I0701 04:56:53.505138   10908 start.go:360] acquireMachinesLock for ha-066000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:56:53.505629   10908 start.go:364] duration metric: took 342.833µs to acquireMachinesLock for "ha-066000"
	I0701 04:56:53.505734   10908 start.go:96] Skipping create...Using existing machine configuration
	I0701 04:56:53.505754   10908 fix.go:54] fixHost starting: 
	I0701 04:56:53.506503   10908 fix.go:112] recreateIfNeeded on ha-066000: state=Stopped err=<nil>
	W0701 04:56:53.506534   10908 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 04:56:53.511124   10908 out.go:177] * Restarting existing qemu2 VM for "ha-066000" ...
	I0701 04:56:53.515336   10908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:0d:2f:af:33:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/ha-066000/disk.qcow2
	I0701 04:56:53.524682   10908 main.go:141] libmachine: STDOUT: 
	I0701 04:56:53.524758   10908 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:56:53.524867   10908 fix.go:56] duration metric: took 19.116333ms for fixHost
	I0701 04:56:53.524887   10908 start.go:83] releasing machines lock for "ha-066000", held for 19.2365ms
	W0701 04:56:53.525066   10908 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-066000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-066000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:56:53.533894   10908 out.go:177] 
	W0701 04:56:53.539155   10908 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:56:53.539184   10908 out.go:239] * 
	* 
	W0701 04:56:53.541674   10908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:56:53.551098   10908 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-066000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (69.200292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
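RestartCluster shows the full retry shape visible in the stderr log: fixHost fails, start.go:713 records the error, start.go:728 waits five seconds, and the second attempt fails the same way before the GUEST_PROVISION exit. A simplified sketch of that control flow, where startHost is a hypothetical stand-in for the real driver start, not minikube code:

	// retrystart.go - simplified sketch of the retry visible in the log
	// (start.go:713/728): one failed host start, a 5 second wait, one retry.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost is a stand-in that always fails the way the qemu2 driver does here.
	func startHost() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}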

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-066000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-066000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-066000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-066000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (29.803375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-066000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-066000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.831334ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-066000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-066000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 04:56:53.743729   10928 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:56:53.743902   10928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:53.743905   10928 out.go:304] Setting ErrFile to fd 2...
	I0701 04:56:53.743908   10928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:56:53.744031   10928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:56:53.744292   10928 mustload.go:65] Loading cluster: ha-066000
	I0701 04:56:53.744470   10928 config.go:182] Loaded profile config "ha-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:56:53.748992   10928 out.go:177] * The control-plane node ha-066000 host is not running: state=Stopped
	I0701 04:56:53.752901   10928 out.go:177]   To start a cluster, run: "minikube start -p ha-066000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-066000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (30.02925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-066000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-066000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-066000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-066000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-066000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-066000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-066000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-066000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-066000 -n ha-066000: exit status 7 (29.500667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (9.88s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-696000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-696000 --driver=qemu2 : exit status 80 (9.805713625s)

                                                
                                                
-- stdout --
	* [image-696000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-696000" primary control-plane node in "image-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-696000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-696000 -n image-696000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-696000 -n image-696000: exit status 7 (69.73875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-696000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.88s)

                                                
                                    
TestJSONOutput/start/Command (9.84s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-571000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-571000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.834976125s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3520df04-c6ee-4223-be6e-a75f4dd35801","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-571000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd5da48e-0deb-4af9-a978-dd9e29fd8232","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19166"}}
	{"specversion":"1.0","id":"605bd196-e221-4233-91f6-0b31846e7e04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig"}}
	{"specversion":"1.0","id":"d7ba4df8-a13d-4494-ac46-24dcc26df603","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"270c8893-b438-466a-8910-f6e91af1908d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f355708a-b789-4112-8531-15faa0909a71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube"}}
	{"specversion":"1.0","id":"9f224b99-6a57-453d-9e29-71317c89943c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d7a8c441-a9fc-4569-a30f-3a8a439736f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ead75e5f-fa0f-4ae3-8b9a-569355656282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c27b0879-5df7-4d8f-bda8-fce5c433b69d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-571000\" primary control-plane node in \"json-output-571000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ae44b1c-9877-42ba-86cd-435b37b68b61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"473a0b98-58c7-44bc-b2a1-771b524b813b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-571000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"da717470-1b7b-4563-87cd-e9725805860a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"dc0ffbcb-b4a1-44e9-8a00-709dbb034e22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"2f20289e-7eb5-462a-bea5-da7e0235a6f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-571000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"8743b73b-ebee-4bca-8412-1664aa589ec4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"5a8950dd-7610-47e0-9618-89ece2f3f98f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-571000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.84s)
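
Note: the cloud-events check above fails because json_output_test.go decodes stdout line by line, and the bare "OUTPUT:" / "ERROR:" lines injected by socket_vmnet_client are not JSON; the first offending byte is the 'O' of "OUTPUT:", which matches the reported error. A minimal sketch of that failure mode, in illustrative Go rather than the test's actual code:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// One valid CloudEvent line, then the raw text that
    	// socket_vmnet_client interleaves into minikube's stdout.
    	lines := []string{
    		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
    		`OUTPUT: `,
    		`ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`,
    	}
    	for _, line := range lines {
    		var ev map[string]interface{}
    		if err := json.Unmarshal([]byte(line), &ev); err != nil {
    			// Prints: invalid character 'O' looking for beginning of value
    			fmt.Println("converting to cloud events:", err)
    			return
    		}
    	}
    }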

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-571000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-571000 --output=json --user=testUser: exit status 83 (82.50825ms)

-- stdout --
	{"specversion":"1.0","id":"a5b6ab41-6167-44e0-85d5-e271331dda79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-571000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"3985a4b2-4267-4ff3-b1f9-34ece4061396","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-571000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-571000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-571000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-571000 --output=json --user=testUser: exit status 83 (44.242958ms)

-- stdout --
	* The control-plane node json-output-571000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-571000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-571000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-571000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-512000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-512000 --driver=qemu2 : exit status 80 (9.718063375s)

-- stdout --
	* [first-512000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-512000" primary control-plane node in "first-512000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-512000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-512000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-01 04:57:27.81414 -0700 PDT m=+468.391393126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-514000 -n second-514000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-514000 -n second-514000: exit status 85 (81.005292ms)

-- stdout --
	* Profile "second-514000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-514000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-514000" host is not running, skipping log retrieval (state="* Profile \"second-514000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-514000\"")
helpers_test.go:175: Cleaning up "second-514000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-514000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-01 04:57:28.004226 -0700 PDT m=+468.581479543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-512000 -n first-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-512000 -n first-512000: exit status 7 (30.542875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-512000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-512000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-512000
--- FAIL: TestMinikubeProfile (10.01s)
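
Note: every start failure in this report reduces to the same root cause: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. no socket_vmnet daemon is listening on its unix socket on the build host. A hedged diagnostic sketch, separate from the test suite, that reproduces the check by dialing the socket directly:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Dial the unix socket that socket_vmnet_client needs; on this
    	// host the dial fails with "connect: connection refused".
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is up")
    }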

TestMountStart/serial/StartWithMountFirst (10.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-175000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-175000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.940557334s)

-- stdout --
	* [mount-start-1-175000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-175000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-175000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-175000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-175000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-175000 -n mount-start-1-175000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-175000 -n mount-start-1-175000: exit status 7 (68.54275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-175000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.01s)

TestMultiNode/serial/FreshStart2Nodes (10s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-037000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-037000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.938200208s)

-- stdout --
	* [multinode-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-037000" primary control-plane node in "multinode-037000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-037000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 04:57:38.323730   11077 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:57:38.323873   11077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:57:38.323877   11077 out.go:304] Setting ErrFile to fd 2...
	I0701 04:57:38.323879   11077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:57:38.324010   11077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:57:38.325065   11077 out.go:298] Setting JSON to false
	I0701 04:57:38.341401   11077 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7027,"bootTime":1719828031,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:57:38.341500   11077 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:57:38.345121   11077 out.go:177] * [multinode-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:57:38.352160   11077 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 04:57:38.352203   11077 notify.go:220] Checking for updates...
	I0701 04:57:38.358115   11077 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:57:38.361147   11077 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:57:38.362455   11077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:57:38.365097   11077 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 04:57:38.368126   11077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 04:57:38.371388   11077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:57:38.376073   11077 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 04:57:38.383125   11077 start.go:297] selected driver: qemu2
	I0701 04:57:38.383132   11077 start.go:901] validating driver "qemu2" against <nil>
	I0701 04:57:38.383139   11077 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 04:57:38.385272   11077 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 04:57:38.388170   11077 out.go:177] * Automatically selected the socket_vmnet network
	I0701 04:57:38.391171   11077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 04:57:38.391203   11077 cni.go:84] Creating CNI manager for ""
	I0701 04:57:38.391208   11077 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0701 04:57:38.391211   11077 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 04:57:38.391238   11077 start.go:340] cluster config:
	{Name:multinode-037000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:57:38.395155   11077 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 04:57:38.402127   11077 out.go:177] * Starting "multinode-037000" primary control-plane node in "multinode-037000" cluster
	I0701 04:57:38.406068   11077 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 04:57:38.406085   11077 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 04:57:38.406096   11077 cache.go:56] Caching tarball of preloaded images
	I0701 04:57:38.406162   11077 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 04:57:38.406168   11077 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 04:57:38.406381   11077 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/multinode-037000/config.json ...
	I0701 04:57:38.406394   11077 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/multinode-037000/config.json: {Name:mk5c3b60dc6f1191ccceef205fad112cc2b9f0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 04:57:38.406740   11077 start.go:360] acquireMachinesLock for multinode-037000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:57:38.406777   11077 start.go:364] duration metric: took 28.917µs to acquireMachinesLock for "multinode-037000"
	I0701 04:57:38.406790   11077 start.go:93] Provisioning new machine with config: &{Name:multinode-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 04:57:38.406823   11077 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 04:57:38.416142   11077 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 04:57:38.434162   11077 start.go:159] libmachine.API.Create for "multinode-037000" (driver="qemu2")
	I0701 04:57:38.434185   11077 client.go:168] LocalClient.Create starting
	I0701 04:57:38.434255   11077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 04:57:38.434285   11077 main.go:141] libmachine: Decoding PEM data...
	I0701 04:57:38.434294   11077 main.go:141] libmachine: Parsing certificate...
	I0701 04:57:38.434330   11077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 04:57:38.434356   11077 main.go:141] libmachine: Decoding PEM data...
	I0701 04:57:38.434370   11077 main.go:141] libmachine: Parsing certificate...
	I0701 04:57:38.434811   11077 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 04:57:38.565534   11077 main.go:141] libmachine: Creating SSH key...
	I0701 04:57:38.775789   11077 main.go:141] libmachine: Creating Disk image...
	I0701 04:57:38.775796   11077 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 04:57:38.776001   11077 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2
	I0701 04:57:38.785689   11077 main.go:141] libmachine: STDOUT: 
	I0701 04:57:38.785710   11077 main.go:141] libmachine: STDERR: 
	I0701 04:57:38.785762   11077 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2 +20000M
	I0701 04:57:38.793805   11077 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 04:57:38.793818   11077 main.go:141] libmachine: STDERR: 
	I0701 04:57:38.793832   11077 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2
	I0701 04:57:38.793837   11077 main.go:141] libmachine: Starting QEMU VM...
	I0701 04:57:38.793867   11077 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:ad:9f:ab:c3:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2
	I0701 04:57:38.795466   11077 main.go:141] libmachine: STDOUT: 
	I0701 04:57:38.795481   11077 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:57:38.795500   11077 client.go:171] duration metric: took 361.312709ms to LocalClient.Create
	I0701 04:57:40.797689   11077 start.go:128] duration metric: took 2.390855209s to createHost
	I0701 04:57:40.797755   11077 start.go:83] releasing machines lock for "multinode-037000", held for 2.390968417s
	W0701 04:57:40.797846   11077 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:57:40.809182   11077 out.go:177] * Deleting "multinode-037000" in qemu2 ...
	W0701 04:57:40.833777   11077 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:57:40.833805   11077 start.go:728] Will try again in 5 seconds ...
	I0701 04:57:45.836095   11077 start.go:360] acquireMachinesLock for multinode-037000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 04:57:45.836570   11077 start.go:364] duration metric: took 342.708µs to acquireMachinesLock for "multinode-037000"
	I0701 04:57:45.836713   11077 start.go:93] Provisioning new machine with config: &{Name:multinode-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 04:57:45.836990   11077 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 04:57:45.849704   11077 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 04:57:45.901625   11077 start.go:159] libmachine.API.Create for "multinode-037000" (driver="qemu2")
	I0701 04:57:45.901669   11077 client.go:168] LocalClient.Create starting
	I0701 04:57:45.901788   11077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 04:57:45.901856   11077 main.go:141] libmachine: Decoding PEM data...
	I0701 04:57:45.901874   11077 main.go:141] libmachine: Parsing certificate...
	I0701 04:57:45.901936   11077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 04:57:45.901980   11077 main.go:141] libmachine: Decoding PEM data...
	I0701 04:57:45.901991   11077 main.go:141] libmachine: Parsing certificate...
	I0701 04:57:45.902858   11077 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 04:57:46.042809   11077 main.go:141] libmachine: Creating SSH key...
	I0701 04:57:46.166693   11077 main.go:141] libmachine: Creating Disk image...
	I0701 04:57:46.166699   11077 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 04:57:46.166867   11077 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2
	I0701 04:57:46.175866   11077 main.go:141] libmachine: STDOUT: 
	I0701 04:57:46.175882   11077 main.go:141] libmachine: STDERR: 
	I0701 04:57:46.175929   11077 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2 +20000M
	I0701 04:57:46.183995   11077 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 04:57:46.184007   11077 main.go:141] libmachine: STDERR: 
	I0701 04:57:46.184021   11077 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2
	I0701 04:57:46.184024   11077 main.go:141] libmachine: Starting QEMU VM...
	I0701 04:57:46.184058   11077 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:48:7c:fa:9f:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2
	I0701 04:57:46.185693   11077 main.go:141] libmachine: STDOUT: 
	I0701 04:57:46.185748   11077 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 04:57:46.185760   11077 client.go:171] duration metric: took 284.087375ms to LocalClient.Create
	I0701 04:57:48.187923   11077 start.go:128] duration metric: took 2.350916s to createHost
	I0701 04:57:48.188005   11077 start.go:83] releasing machines lock for "multinode-037000", held for 2.351423208s
	W0701 04:57:48.188386   11077 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 04:57:48.201743   11077 out.go:177] 
	W0701 04:57:48.206169   11077 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 04:57:48.206199   11077 out.go:239] * 
	* 
	W0701 04:57:48.208249   11077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:57:48.221721   11077 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-037000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (61.873459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.00s)
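
Note: the verbose stderr above shows minikube's recovery path in full: createHost fails (start.go:713), the half-created VM is deleted, the driver waits 5 seconds (start.go:728), retries once, and then exits 80 with GUEST_PROVISION. A schematic of that control flow; startHost and deleteHost are hypothetical stand-ins, not minikube APIs:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // startHost stands in for the host-creation step that fails while
    // spawning qemu via socket_vmnet_client in this run.
    func startHost() error {
    	return errors.New(`creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
    }

    // deleteHost stands in for the "Deleting ... in qemu2 ..." cleanup.
    func deleteHost() {}

    func main() {
    	if err := startHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		deleteHost()
    		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
    		if err := startHost(); err != nil {
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    			os.Exit(80) // matches the observed exit status
    		}
    	}
    }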

TestMultiNode/serial/DeployApp2Nodes (116.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (58.867625ms)

** stderr ** 
	error: cluster "multinode-037000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- rollout status deployment/busybox: exit status 1 (56.828166ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.208166ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.957833ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.878583ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.231417ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.744459ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.404875ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.447834ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.896167ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.856541ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.768666ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.833292ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.248875ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.350125ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.431458ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.667708ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (30.480667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (116.84s)
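
Note: the repeated "failed to retrieve Pod IPs (may be temporary)" lines above come from a poll loop; with no cluster behind the profile, every attempt fails until the retry budget is spent. A schematic sketch of that loop, where the helper name and the sleep interval are illustrative rather than the test's exact schedule:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // podIPs shells out the same way the log shows:
    // out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods ...
    func podIPs() (string, error) {
    	out, err := exec.Command("out/minikube-darwin-arm64",
    		"kubectl", "-p", "multinode-037000", "--",
    		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for attempt := 0; attempt < 11; attempt++ { // 11 attempts appear in the log
    		ips, err := podIPs()
    		if err == nil {
    			fmt.Println("pod IPs:", ips)
    			return
    		}
    		fmt.Println("failed to retrieve Pod IPs (may be temporary):", err)
    		time.Sleep(10 * time.Second) // illustrative backoff
    	}
    	fmt.Println("failed to resolve pod IPs")
    }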

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-037000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.9645ms)

** stderr ** 
	error: no server found for cluster "multinode-037000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (31.017666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-037000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-037000 -v 3 --alsologtostderr: exit status 83 (46.415916ms)

-- stdout --
	* The control-plane node multinode-037000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-037000"

-- /stdout --
** stderr ** 
	I0701 04:59:45.254592   11188 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:59:45.255163   11188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:45.255167   11188 out.go:304] Setting ErrFile to fd 2...
	I0701 04:59:45.255169   11188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:45.255300   11188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:59:45.255523   11188 mustload.go:65] Loading cluster: multinode-037000
	I0701 04:59:45.255726   11188 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:59:45.263030   11188 out.go:177] * The control-plane node multinode-037000 host is not running: state=Stopped
	I0701 04:59:45.267143   11188 out.go:177]   To start a cluster, run: "minikube start -p multinode-037000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-037000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (30.430791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-037000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-037000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.817792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-037000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-037000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-037000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (30.102125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
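
The second failure line here ("unexpected end of JSON input") is the standard encoding/json error for empty input: kubectl wrote nothing to stdout, so the test ends up decoding a zero-length byte slice. A minimal reproduction:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	// kubectl printed nothing, so the test decodes an empty byte slice,
	// which encoding/json rejects with exactly the error seen above.
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}
```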

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-037000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-037000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-037000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"multinode-037000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (30.755084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
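
The assertion at multinode_test.go:166 decodes the `profile list --output json` payload shown above and counts the entries under Config.Nodes, expecting 3 after the earlier add-node steps. A sketch of that decode with just enough structure to count nodes; field names follow the JSON in the failure message, and the trimmed payload is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileList models only the fields needed to count nodes per profile.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-037000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expected 3 nodes here; the stopped profile reports 1.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}
```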

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status --output json --alsologtostderr: exit status 7 (30.438917ms)

-- stdout --
	{"Name":"multinode-037000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0701 04:59:45.465550   11200 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:59:45.465693   11200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:45.465696   11200 out.go:304] Setting ErrFile to fd 2...
	I0701 04:59:45.465699   11200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:45.465824   11200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:59:45.465938   11200 out.go:298] Setting JSON to true
	I0701 04:59:45.465953   11200 mustload.go:65] Loading cluster: multinode-037000
	I0701 04:59:45.466014   11200 notify.go:220] Checking for updates...
	I0701 04:59:45.466145   11200 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:59:45.466152   11200 status.go:255] checking status of multinode-037000 ...
	I0701 04:59:45.466355   11200 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 04:59:45.466358   11200 status.go:343] host is not running, skipping remaining checks
	I0701 04:59:45.466361   11200 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-037000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (30.460167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
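
The decode error at multinode_test.go:191 ("cannot unmarshal object into Go value of type []cmd.Status") is encoding/json refusing to read a lone JSON object into a slice: with only one node, `status --output json` printed a single object rather than an array. A minimal reproduction, using a local Status stand-in for minikube's cmd.Status:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the fields of the single-object JSON printed above;
// the real type in the test is minikube's cmd.Status.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	raw := []byte(`{"Name":"multinode-037000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status
	// A lone JSON object is not an array, so decoding into a slice fails
	// the same way the test does.
	fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	var one Status
	fmt.Println(json.Unmarshal(raw, &one)) // <nil>: a single object decodes fine
}
```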

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 node stop m03: exit status 85 (50.346459ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-037000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status: exit status 7 (29.959084ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status --alsologtostderr: exit status 7 (29.734375ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:59:45.606909   11208 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:59:45.607057   11208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:45.607060   11208 out.go:304] Setting ErrFile to fd 2...
	I0701 04:59:45.607062   11208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:45.607193   11208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:59:45.607325   11208 out.go:298] Setting JSON to false
	I0701 04:59:45.607339   11208 mustload.go:65] Loading cluster: multinode-037000
	I0701 04:59:45.607392   11208 notify.go:220] Checking for updates...
	I0701 04:59:45.607531   11208 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:59:45.607539   11208 status.go:255] checking status of multinode-037000 ...
	I0701 04:59:45.607744   11208 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 04:59:45.607748   11208 status.go:343] host is not running, skipping remaining checks
	I0701 04:59:45.607750   11208 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-037000 status --alsologtostderr": multinode-037000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (30.552375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (48.07s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.257208ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0701 04:59:45.667487   11212 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:59:45.667858   11212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:45.667864   11212 out.go:304] Setting ErrFile to fd 2...
	I0701 04:59:45.667866   11212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:45.668079   11212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:59:45.668346   11212 mustload.go:65] Loading cluster: multinode-037000
	I0701 04:59:45.668544   11212 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:59:45.672879   11212 out.go:177] 
	W0701 04:59:45.676867   11212 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0701 04:59:45.676872   11212 out.go:239] * 
	* 
	W0701 04:59:45.678835   11212 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:59:45.682897   11212 out.go:177] 

** /stderr **
multinode_test.go:284: I0701 04:59:45.667487   11212 out.go:291] Setting OutFile to fd 1 ...
I0701 04:59:45.667858   11212 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:59:45.667864   11212 out.go:304] Setting ErrFile to fd 2...
I0701 04:59:45.667866   11212 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 04:59:45.668079   11212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
I0701 04:59:45.668346   11212 mustload.go:65] Loading cluster: multinode-037000
I0701 04:59:45.668544   11212 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 04:59:45.672879   11212 out.go:177] 
W0701 04:59:45.676867   11212 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0701 04:59:45.676872   11212 out.go:239] * 
* 
W0701 04:59:45.678835   11212 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0701 04:59:45.682897   11212 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-037000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr: exit status 7 (29.837375ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:59:45.716064   11214 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:59:45.716210   11214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:45.716213   11214 out.go:304] Setting ErrFile to fd 2...
	I0701 04:59:45.716215   11214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:45.716344   11214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:59:45.716463   11214 out.go:298] Setting JSON to false
	I0701 04:59:45.716478   11214 mustload.go:65] Loading cluster: multinode-037000
	I0701 04:59:45.716543   11214 notify.go:220] Checking for updates...
	I0701 04:59:45.716672   11214 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:59:45.716679   11214 status.go:255] checking status of multinode-037000 ...
	I0701 04:59:45.716925   11214 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 04:59:45.716929   11214 status.go:343] host is not running, skipping remaining checks
	I0701 04:59:45.716931   11214 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr: exit status 7 (73.786125ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:59:46.852325   11216 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:59:46.852522   11216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:46.852527   11216 out.go:304] Setting ErrFile to fd 2...
	I0701 04:59:46.852530   11216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:46.852709   11216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:59:46.852862   11216 out.go:298] Setting JSON to false
	I0701 04:59:46.852879   11216 mustload.go:65] Loading cluster: multinode-037000
	I0701 04:59:46.852923   11216 notify.go:220] Checking for updates...
	I0701 04:59:46.853140   11216 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:59:46.853147   11216 status.go:255] checking status of multinode-037000 ...
	I0701 04:59:46.853437   11216 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 04:59:46.853443   11216 status.go:343] host is not running, skipping remaining checks
	I0701 04:59:46.853445   11216 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr: exit status 7 (73.643459ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:59:47.684693   11218 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:59:47.684898   11218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:47.684903   11218 out.go:304] Setting ErrFile to fd 2...
	I0701 04:59:47.684906   11218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:47.685120   11218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:59:47.685281   11218 out.go:298] Setting JSON to false
	I0701 04:59:47.685295   11218 mustload.go:65] Loading cluster: multinode-037000
	I0701 04:59:47.685334   11218 notify.go:220] Checking for updates...
	I0701 04:59:47.685576   11218 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:59:47.685584   11218 status.go:255] checking status of multinode-037000 ...
	I0701 04:59:47.685874   11218 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 04:59:47.685879   11218 status.go:343] host is not running, skipping remaining checks
	I0701 04:59:47.685882   11218 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr: exit status 7 (73.744958ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:59:50.607295   11220 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:59:50.607526   11220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:50.607531   11220 out.go:304] Setting ErrFile to fd 2...
	I0701 04:59:50.607534   11220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:50.607694   11220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:59:50.607855   11220 out.go:298] Setting JSON to false
	I0701 04:59:50.607873   11220 mustload.go:65] Loading cluster: multinode-037000
	I0701 04:59:50.607921   11220 notify.go:220] Checking for updates...
	I0701 04:59:50.608133   11220 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:59:50.608141   11220 status.go:255] checking status of multinode-037000 ...
	I0701 04:59:50.608396   11220 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 04:59:50.608401   11220 status.go:343] host is not running, skipping remaining checks
	I0701 04:59:50.608404   11220 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr: exit status 7 (73.583125ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:59:55.265645   11224 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:59:55.265825   11224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:55.265830   11224 out.go:304] Setting ErrFile to fd 2...
	I0701 04:59:55.265832   11224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:55.265981   11224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:59:55.266130   11224 out.go:298] Setting JSON to false
	I0701 04:59:55.266145   11224 mustload.go:65] Loading cluster: multinode-037000
	I0701 04:59:55.266183   11224 notify.go:220] Checking for updates...
	I0701 04:59:55.266412   11224 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:59:55.266421   11224 status.go:255] checking status of multinode-037000 ...
	I0701 04:59:55.266682   11224 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 04:59:55.266687   11224 status.go:343] host is not running, skipping remaining checks
	I0701 04:59:55.266690   11224 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr: exit status 7 (72.367709ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 04:59:59.027062   11226 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:59:59.027272   11226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:59.027277   11226 out.go:304] Setting ErrFile to fd 2...
	I0701 04:59:59.027280   11226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:59:59.027502   11226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:59:59.027653   11226 out.go:298] Setting JSON to false
	I0701 04:59:59.027674   11226 mustload.go:65] Loading cluster: multinode-037000
	I0701 04:59:59.027709   11226 notify.go:220] Checking for updates...
	I0701 04:59:59.027927   11226 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:59:59.027937   11226 status.go:255] checking status of multinode-037000 ...
	I0701 04:59:59.028207   11226 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 04:59:59.028212   11226 status.go:343] host is not running, skipping remaining checks
	I0701 04:59:59.028215   11226 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr: exit status 7 (74.647875ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 05:00:08.142183   11245 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:00:08.142427   11245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:08.142431   11245 out.go:304] Setting ErrFile to fd 2...
	I0701 05:00:08.142435   11245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:08.142610   11245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:00:08.142775   11245 out.go:298] Setting JSON to false
	I0701 05:00:08.142794   11245 mustload.go:65] Loading cluster: multinode-037000
	I0701 05:00:08.142821   11245 notify.go:220] Checking for updates...
	I0701 05:00:08.143038   11245 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:00:08.143047   11245 status.go:255] checking status of multinode-037000 ...
	I0701 05:00:08.143353   11245 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 05:00:08.143358   11245 status.go:343] host is not running, skipping remaining checks
	I0701 05:00:08.143361   11245 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr: exit status 7 (76.425709ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 05:00:14.812318   11251 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:00:14.812507   11251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:14.812512   11251 out.go:304] Setting ErrFile to fd 2...
	I0701 05:00:14.812516   11251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:14.812682   11251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:00:14.812839   11251 out.go:298] Setting JSON to false
	I0701 05:00:14.812856   11251 mustload.go:65] Loading cluster: multinode-037000
	I0701 05:00:14.812883   11251 notify.go:220] Checking for updates...
	I0701 05:00:14.813114   11251 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:00:14.813123   11251 status.go:255] checking status of multinode-037000 ...
	I0701 05:00:14.813414   11251 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 05:00:14.813419   11251 status.go:343] host is not running, skipping remaining checks
	I0701 05:00:14.813422   11251 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr: exit status 7 (73.22025ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 05:00:33.674153   11257 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:00:33.674355   11257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:33.674360   11257 out.go:304] Setting ErrFile to fd 2...
	I0701 05:00:33.674362   11257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:33.674553   11257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:00:33.674705   11257 out.go:298] Setting JSON to false
	I0701 05:00:33.674722   11257 mustload.go:65] Loading cluster: multinode-037000
	I0701 05:00:33.674758   11257 notify.go:220] Checking for updates...
	I0701 05:00:33.675005   11257 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:00:33.675018   11257 status.go:255] checking status of multinode-037000 ...
	I0701 05:00:33.675306   11257 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 05:00:33.675311   11257 status.go:343] host is not running, skipping remaining checks
	I0701 05:00:33.675314   11257 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-037000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (33.160083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (48.07s)
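
The timestamps in the repeated status checks above (04:59:45, :46, :47, :50, :55, then 05:00:08, :14, :33) show the test polling at growing intervals until it gives up after roughly 48 seconds. A sketch of that poll-with-backoff pattern; the intervals, cap, and isRunning predicate are illustrative, not minikube's actual retry code:

```go
package main

import (
	"fmt"
	"time"
)

// waitRunning re-checks a condition at growing intervals until it holds
// or the deadline passes, roughly matching the gaps in the log above.
func waitRunning(isRunning func() bool, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	delay := time.Second
	for time.Now().Before(deadline) {
		if isRunning() {
			return true
		}
		time.Sleep(delay)
		if delay < 20*time.Second {
			delay *= 2 // back off between attempts
		}
	}
	return false
}

func main() {
	// With the host stopped, the check never succeeds and the wait
	// times out, as in the failure above.
	ok := waitRunning(func() bool { return false }, 5*time.Second)
	fmt.Println("running:", ok)
}
```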

TestMultiNode/serial/RestartKeepsNodes (7.32s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-037000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-037000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-037000: (1.95774875s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-037000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-037000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.225441875s)

-- stdout --
	* [multinode-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-037000" primary control-plane node in "multinode-037000" cluster
	* Restarting existing qemu2 VM for "multinode-037000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-037000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:00:35.762284   11275 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:00:35.762449   11275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:35.762454   11275 out.go:304] Setting ErrFile to fd 2...
	I0701 05:00:35.762457   11275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:35.762637   11275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:00:35.763852   11275 out.go:298] Setting JSON to false
	I0701 05:00:35.783082   11275 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7204,"bootTime":1719828031,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:00:35.783160   11275 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:00:35.787867   11275 out.go:177] * [multinode-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:00:35.794826   11275 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:00:35.794890   11275 notify.go:220] Checking for updates...
	I0701 05:00:35.800076   11275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:00:35.802727   11275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:00:35.805819   11275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:00:35.808794   11275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:00:35.811793   11275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:00:35.815121   11275 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:00:35.815191   11275 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:00:35.819795   11275 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:00:35.826694   11275 start.go:297] selected driver: qemu2
	I0701 05:00:35.826699   11275 start.go:901] validating driver "qemu2" against &{Name:multinode-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:00:35.826742   11275 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:00:35.828995   11275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:00:35.829034   11275 cni.go:84] Creating CNI manager for ""
	I0701 05:00:35.829040   11275 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0701 05:00:35.829081   11275 start.go:340] cluster config:
	{Name:multinode-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:00:35.832747   11275 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:00:35.841764   11275 out.go:177] * Starting "multinode-037000" primary control-plane node in "multinode-037000" cluster
	I0701 05:00:35.845793   11275 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:00:35.845809   11275 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:00:35.845820   11275 cache.go:56] Caching tarball of preloaded images
	I0701 05:00:35.845886   11275 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:00:35.845893   11275 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:00:35.845961   11275 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/multinode-037000/config.json ...
	I0701 05:00:35.846453   11275 start.go:360] acquireMachinesLock for multinode-037000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:00:35.846490   11275 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "multinode-037000"
	I0701 05:00:35.846501   11275 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:00:35.846508   11275 fix.go:54] fixHost starting: 
	I0701 05:00:35.846642   11275 fix.go:112] recreateIfNeeded on multinode-037000: state=Stopped err=<nil>
	W0701 05:00:35.846650   11275 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:00:35.854820   11275 out.go:177] * Restarting existing qemu2 VM for "multinode-037000" ...
	I0701 05:00:35.858794   11275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:48:7c:fa:9f:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2
	I0701 05:00:35.860905   11275 main.go:141] libmachine: STDOUT: 
	I0701 05:00:35.860932   11275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:00:35.860966   11275 fix.go:56] duration metric: took 14.45825ms for fixHost
	I0701 05:00:35.860972   11275 start.go:83] releasing machines lock for "multinode-037000", held for 14.476875ms
	W0701 05:00:35.860979   11275 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:00:35.861014   11275 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:00:35.861020   11275 start.go:728] Will try again in 5 seconds ...
	I0701 05:00:40.863167   11275 start.go:360] acquireMachinesLock for multinode-037000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:00:40.863624   11275 start.go:364] duration metric: took 334.666µs to acquireMachinesLock for "multinode-037000"
	I0701 05:00:40.863756   11275 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:00:40.863773   11275 fix.go:54] fixHost starting: 
	I0701 05:00:40.864513   11275 fix.go:112] recreateIfNeeded on multinode-037000: state=Stopped err=<nil>
	W0701 05:00:40.864538   11275 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:00:40.869051   11275 out.go:177] * Restarting existing qemu2 VM for "multinode-037000" ...
	I0701 05:00:40.878102   11275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:48:7c:fa:9f:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2
	I0701 05:00:40.887069   11275 main.go:141] libmachine: STDOUT: 
	I0701 05:00:40.887148   11275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:00:40.887203   11275 fix.go:56] duration metric: took 23.432ms for fixHost
	I0701 05:00:40.887222   11275 start.go:83] releasing machines lock for "multinode-037000", held for 23.576459ms
	W0701 05:00:40.887395   11275 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-037000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-037000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:00:40.895922   11275 out.go:177] 
	W0701 05:00:40.900033   11275 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:00:40.900065   11275 out.go:239] * 
	* 
	W0701 05:00:40.902712   11275 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:00:40.909954   11275 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-037000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-037000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (33.789125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.32s)
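
Note: every start/restart attempt in this block fails at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched and the machine stays Stopped. A minimal triage on the CI host might look like the sketch below; the restart command and service name depend on how socket_vmnet was installed, and only the /opt/socket_vmnet paths are taken from the log above.

	# Is the unix socket present, and is a socket_vmnet process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# socket_vmnet_client connects to the socket and execs the given command with
	# the connection passed as fd 3, so any trivial command probes connectivity:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo "socket reachable"

If the socket is missing or nothing is listening, restarting the daemon (e.g. its launchd job, or "sudo brew services restart socket_vmnet" on a Homebrew install) should clear this entire class of failures.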

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 node delete m03: exit status 83 (42.446125ms)

-- stdout --
	* The control-plane node multinode-037000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-037000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-037000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status --alsologtostderr: exit status 7 (30.320333ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 05:00:41.097197   11289 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:00:41.097351   11289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:41.097354   11289 out.go:304] Setting ErrFile to fd 2...
	I0701 05:00:41.097356   11289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:41.097495   11289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:00:41.097618   11289 out.go:298] Setting JSON to false
	I0701 05:00:41.097633   11289 mustload.go:65] Loading cluster: multinode-037000
	I0701 05:00:41.097691   11289 notify.go:220] Checking for updates...
	I0701 05:00:41.097820   11289 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:00:41.097827   11289 status.go:255] checking status of multinode-037000 ...
	I0701 05:00:41.098027   11289 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 05:00:41.098031   11289 status.go:343] host is not running, skipping remaining checks
	I0701 05:00:41.098033   11289 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-037000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (30.639458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
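
Note: the two exit codes above carry different meanings. "node delete" exits 83 because minikube declines to operate on a cluster whose control-plane host is Stopped and prints guidance instead, while "status" exits 7 whenever the host is not Running, which helpers_test.go explicitly tolerates ("may be ok"). The Go-template form of status keeps this scriptable, the same pattern the post-mortem itself uses:

	out/minikube-darwin-arm64 status --format='{{.Host}}' -p multinode-037000
	echo "exit=$?"    # 0 => Running; 7 => host Stopped, as seen throughout this report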

TestMultiNode/serial/StopMultiNode (3.39s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-037000 stop: (3.255281791s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status: exit status 7 (73.246625ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-037000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-037000 status --alsologtostderr: exit status 7 (33.934375ms)

-- stdout --
	multinode-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 05:00:44.490883   11313 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:00:44.491033   11313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:44.491037   11313 out.go:304] Setting ErrFile to fd 2...
	I0701 05:00:44.491039   11313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:44.491175   11313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:00:44.491291   11313 out.go:298] Setting JSON to false
	I0701 05:00:44.491302   11313 mustload.go:65] Loading cluster: multinode-037000
	I0701 05:00:44.491364   11313 notify.go:220] Checking for updates...
	I0701 05:00:44.491535   11313 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:00:44.491547   11313 status.go:255] checking status of multinode-037000 ...
	I0701 05:00:44.491750   11313 status.go:330] multinode-037000 host status = "Stopped" (err=<nil>)
	I0701 05:00:44.491754   11313 status.go:343] host is not running, skipping remaining checks
	I0701 05:00:44.491761   11313 status.go:257] multinode-037000 status: &{Name:multinode-037000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-037000 status --alsologtostderr": multinode-037000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-037000 status --alsologtostderr": multinode-037000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (30.604834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.39s)
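
Note: the stop itself succeeded (3.26s); the failure is in the count. Because no worker node ever joined the cluster, status prints a single "host: Stopped"/"kubelet: Stopped" block where the test appears to expect one per requested node. The count being asserted can be reproduced with a sketch like:

	# Prints 1 here, not the multi-node count the test expects:
	out/minikube-darwin-arm64 -p multinode-037000 status | grep -c "host: Stopped"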

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-037000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-037000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.184583584s)

-- stdout --
	* [multinode-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-037000" primary control-plane node in "multinode-037000" cluster
	* Restarting existing qemu2 VM for "multinode-037000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-037000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:00:44.551514   11317 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:00:44.551648   11317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:44.551651   11317 out.go:304] Setting ErrFile to fd 2...
	I0701 05:00:44.551653   11317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:00:44.551797   11317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:00:44.552824   11317 out.go:298] Setting JSON to false
	I0701 05:00:44.568954   11317 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7213,"bootTime":1719828031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:00:44.569027   11317 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:00:44.572646   11317 out.go:177] * [multinode-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:00:44.579491   11317 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:00:44.579551   11317 notify.go:220] Checking for updates...
	I0701 05:00:44.586502   11317 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:00:44.589528   11317 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:00:44.592484   11317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:00:44.595498   11317 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:00:44.598427   11317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:00:44.601817   11317 config.go:182] Loaded profile config "multinode-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:00:44.602097   11317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:00:44.606463   11317 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:00:44.613485   11317 start.go:297] selected driver: qemu2
	I0701 05:00:44.613490   11317 start.go:901] validating driver "qemu2" against &{Name:multinode-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:multinode-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:00:44.613537   11317 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:00:44.615901   11317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:00:44.615962   11317 cni.go:84] Creating CNI manager for ""
	I0701 05:00:44.615966   11317 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0701 05:00:44.616001   11317 start.go:340] cluster config:
	{Name:multinode-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-037000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:00:44.619458   11317 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:00:44.627434   11317 out.go:177] * Starting "multinode-037000" primary control-plane node in "multinode-037000" cluster
	I0701 05:00:44.631498   11317 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:00:44.631517   11317 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:00:44.631525   11317 cache.go:56] Caching tarball of preloaded images
	I0701 05:00:44.631583   11317 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:00:44.631589   11317 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:00:44.631646   11317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/multinode-037000/config.json ...
	I0701 05:00:44.632101   11317 start.go:360] acquireMachinesLock for multinode-037000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:00:44.632127   11317 start.go:364] duration metric: took 20.75µs to acquireMachinesLock for "multinode-037000"
	I0701 05:00:44.632137   11317 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:00:44.632142   11317 fix.go:54] fixHost starting: 
	I0701 05:00:44.632252   11317 fix.go:112] recreateIfNeeded on multinode-037000: state=Stopped err=<nil>
	W0701 05:00:44.632261   11317 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:00:44.640466   11317 out.go:177] * Restarting existing qemu2 VM for "multinode-037000" ...
	I0701 05:00:44.643505   11317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:48:7c:fa:9f:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2
	I0701 05:00:44.645456   11317 main.go:141] libmachine: STDOUT: 
	I0701 05:00:44.645474   11317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:00:44.645499   11317 fix.go:56] duration metric: took 13.358625ms for fixHost
	I0701 05:00:44.645502   11317 start.go:83] releasing machines lock for "multinode-037000", held for 13.371333ms
	W0701 05:00:44.645508   11317 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:00:44.645542   11317 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:00:44.645546   11317 start.go:728] Will try again in 5 seconds ...
	I0701 05:00:49.647821   11317 start.go:360] acquireMachinesLock for multinode-037000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:00:49.648368   11317 start.go:364] duration metric: took 447.166µs to acquireMachinesLock for "multinode-037000"
	I0701 05:00:49.648558   11317 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:00:49.648580   11317 fix.go:54] fixHost starting: 
	I0701 05:00:49.649342   11317 fix.go:112] recreateIfNeeded on multinode-037000: state=Stopped err=<nil>
	W0701 05:00:49.649371   11317 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:00:49.653944   11317 out.go:177] * Restarting existing qemu2 VM for "multinode-037000" ...
	I0701 05:00:49.661034   11317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:48:7c:fa:9f:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/multinode-037000/disk.qcow2
	I0701 05:00:49.670438   11317 main.go:141] libmachine: STDOUT: 
	I0701 05:00:49.670508   11317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:00:49.670628   11317 fix.go:56] duration metric: took 22.04925ms for fixHost
	I0701 05:00:49.670648   11317 start.go:83] releasing machines lock for "multinode-037000", held for 22.210917ms
	W0701 05:00:49.670819   11317 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-037000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-037000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:00:49.679717   11317 out.go:177] 
	W0701 05:00:49.683943   11317 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:00:49.683966   11317 out.go:239] * 
	* 
	W0701 05:00:49.686746   11317 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:00:49.694820   11317 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-037000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (69.129167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
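
Note: as the libmachine lines show, minikube does not exec qemu directly; it wraps the VM in socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the VM the connected socket as fd 3 (hence "-netdev socket,id=net0,fd=3"). When that connect is refused, qemu never starts at all, which is why each fixHost attempt returns within tens of milliseconds and the retry five seconds later fails identically. Stripped of the long machine paths (<machine-dir> below is a placeholder), the launch has this shape:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -device virtio-net-pci,netdev=net0,mac=de:48:7c:fa:9f:d3 \
	  -netdev socket,id=net0,fd=3 \
	  -daemonize <machine-dir>/disk.qcow2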

TestMultiNode/serial/ValidateNameConflict (20.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-037000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-037000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-037000-m01 --driver=qemu2 : exit status 80 (9.826916667s)

-- stdout --
	* [multinode-037000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-037000-m01" primary control-plane node in "multinode-037000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-037000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-037000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-037000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-037000-m02 --driver=qemu2 : exit status 80 (9.992373875s)

-- stdout --
	* [multinode-037000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-037000-m02" primary control-plane node in "multinode-037000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-037000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-037000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-037000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-037000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-037000: exit status 83 (80.793833ms)

-- stdout --
	* The control-plane node multinode-037000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-037000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-037000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-037000 -n multinode-037000: exit status 7 (31.253416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.05s)
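
Note: this test never reaches the name-conflict logic it exists to validate: both throwaway profiles (multinode-037000-m01 and -m02, names chosen to collide with multinode's <cluster>-mNN node-naming scheme) fail to start for the same socket_vmnet reason, and the final "node add" is rejected with exit 83 only because the control plane is Stopped, not because of the conflict. On a healthy host, the names the conflict check compares can be inspected with:

	out/minikube-darwin-arm64 profile list
	out/minikube-darwin-arm64 node list -p multinode-037000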

TestPreload (10.09s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-827000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-827000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.945537417s)

-- stdout --
	* [test-preload-827000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-827000" primary control-plane node in "test-preload-827000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-827000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:01:09.960288   11381 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:01:09.960442   11381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:01:09.960445   11381 out.go:304] Setting ErrFile to fd 2...
	I0701 05:01:09.960447   11381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:01:09.960576   11381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:01:09.961614   11381 out.go:298] Setting JSON to false
	I0701 05:01:09.977899   11381 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7238,"bootTime":1719828031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:01:09.977974   11381 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:01:09.983894   11381 out.go:177] * [test-preload-827000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:01:09.990867   11381 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:01:09.990912   11381 notify.go:220] Checking for updates...
	I0701 05:01:09.997793   11381 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:01:10.000835   11381 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:01:10.003906   11381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:01:10.006775   11381 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:01:10.009858   11381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:01:10.013251   11381 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:01:10.013304   11381 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:01:10.017768   11381 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:01:10.024837   11381 start.go:297] selected driver: qemu2
	I0701 05:01:10.024844   11381 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:01:10.024852   11381 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:01:10.027209   11381 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:01:10.029791   11381 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:01:10.032799   11381 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:01:10.032827   11381 cni.go:84] Creating CNI manager for ""
	I0701 05:01:10.032835   11381 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:01:10.032840   11381 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:01:10.032873   11381 start.go:340] cluster config:
	{Name:test-preload-827000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/so
cket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:01:10.036525   11381 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:10.043837   11381 out.go:177] * Starting "test-preload-827000" primary control-plane node in "test-preload-827000" cluster
	I0701 05:01:10.047789   11381 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0701 05:01:10.047886   11381 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/test-preload-827000/config.json ...
	I0701 05:01:10.047903   11381 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/test-preload-827000/config.json: {Name:mk428b735b79588ccbde4bd3781faff407baf86e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:01:10.047901   11381 cache.go:107] acquiring lock: {Name:mkb28b7d830b0b18ece9878c83ddd303ab5bb3c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:10.047922   11381 cache.go:107] acquiring lock: {Name:mk1cb1eddde051d5f573e338b11bbeefce621aa1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:10.047928   11381 cache.go:107] acquiring lock: {Name:mk2f3c443141097f59696c29673bcfe384e02abb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:10.048040   11381 cache.go:107] acquiring lock: {Name:mk12b7203a624fc20edb9c7f8231e1a6045f3959 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:10.048086   11381 cache.go:107] acquiring lock: {Name:mk6154e54ac43d00860dd8529ade807ce2ee42ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:10.048055   11381 cache.go:107] acquiring lock: {Name:mk3f15eb7b7504d7af1694849abfdb3b7b599dc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:10.048143   11381 cache.go:107] acquiring lock: {Name:mk4026d8dddaae59be77db177ba569b4cc8aea8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:10.048213   11381 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0701 05:01:10.048214   11381 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0701 05:01:10.047903   11381 cache.go:107] acquiring lock: {Name:mk03bef6a581b512a684be91e67fd87a1facf587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:01:10.048282   11381 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:01:10.048386   11381 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0701 05:01:10.048443   11381 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0701 05:01:10.048448   11381 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:01:10.048456   11381 start.go:360] acquireMachinesLock for test-preload-827000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:01:10.048466   11381 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0701 05:01:10.048494   11381 start.go:364] duration metric: took 32.25µs to acquireMachinesLock for "test-preload-827000"
	I0701 05:01:10.048511   11381 start.go:93] Provisioning new machine with config: &{Name:test-preload-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:01:10.048548   11381 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:01:10.048607   11381 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:01:10.052795   11381 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:01:10.057683   11381 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0701 05:01:10.060892   11381 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0701 05:01:10.061327   11381 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:01:10.061344   11381 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0701 05:01:10.061408   11381 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0701 05:01:10.062248   11381 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0701 05:01:10.062286   11381 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:01:10.062390   11381 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:01:10.069862   11381 start.go:159] libmachine.API.Create for "test-preload-827000" (driver="qemu2")
	I0701 05:01:10.069892   11381 client.go:168] LocalClient.Create starting
	I0701 05:01:10.069976   11381 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:01:10.070011   11381 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:10.070022   11381 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:10.070065   11381 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:01:10.070088   11381 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:10.070096   11381 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:10.070514   11381 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:01:10.206250   11381 main.go:141] libmachine: Creating SSH key...
	I0701 05:01:10.426180   11381 main.go:141] libmachine: Creating Disk image...
	I0701 05:01:10.426197   11381 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:01:10.426374   11381 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2
	I0701 05:01:10.431251   11381 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0701 05:01:10.434666   11381 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0701 05:01:10.436411   11381 main.go:141] libmachine: STDOUT: 
	I0701 05:01:10.436420   11381 main.go:141] libmachine: STDERR: 
	I0701 05:01:10.436463   11381 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2 +20000M
	I0701 05:01:10.444713   11381 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:01:10.444726   11381 main.go:141] libmachine: STDERR: 
	I0701 05:01:10.444737   11381 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2
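
The two qemu-img invocations above are the driver's standard disk-provisioning sequence: convert the raw base image to qcow2, then grow its virtual size by the requested 20000 MB. A minimal sketch of the same sequence, with hypothetical file names (only qemu-img itself is assumed):

    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2  # re-encode the raw image as qcow2
    qemu-img resize disk.qcow2 +20000M                          # grow the virtual size by 20000 MB
    qemu-img info disk.qcow2                                    # verify the format and new virtual size
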
	I0701 05:01:10.444741   11381 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:01:10.444777   11381 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:a8:41:aa:4c:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2
	I0701 05:01:10.445571   11381 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0701 05:01:10.446635   11381 main.go:141] libmachine: STDOUT: 
	I0701 05:01:10.446644   11381 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:01:10.446660   11381 client.go:171] duration metric: took 376.76475ms to LocalClient.Create
	I0701 05:01:10.454031   11381 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0701 05:01:10.475544   11381 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0701 05:01:10.501090   11381 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0701 05:01:10.537654   11381 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0701 05:01:10.537679   11381 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0701 05:01:10.764031   11381 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0701 05:01:10.764096   11381 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 716.167375ms
	I0701 05:01:10.764149   11381 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0701 05:01:11.051351   11381 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0701 05:01:11.051471   11381 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0701 05:01:11.248722   11381 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 05:01:11.248759   11381 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.200861833s
	I0701 05:01:11.248780   11381 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 05:01:12.028170   11381 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0701 05:01:12.028221   11381 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.980156416s
	I0701 05:01:12.028252   11381 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0701 05:01:12.446874   11381 start.go:128] duration metric: took 2.398307958s to createHost
	I0701 05:01:12.446931   11381 start.go:83] releasing machines lock for "test-preload-827000", held for 2.398434708s
	W0701 05:01:12.447018   11381 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:01:12.455228   11381 out.go:177] * Deleting "test-preload-827000" in qemu2 ...
	W0701 05:01:12.478812   11381 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:01:12.478850   11381 start.go:728] Will try again in 5 seconds ...
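
Every failure in this run traces back to the same line: socket_vmnet_client could not reach a socket_vmnet daemon on the /var/run/socket_vmnet unix socket, so QEMU never received the network file descriptor it is launched with (-netdev socket,id=net0,fd=3). Some hedged host-side checks, assuming the /opt/socket_vmnet install that the paths above suggest:

    ls -l /var/run/socket_vmnet           # does the unix socket exist at all?
    ps aux | grep '[s]ocket_vmnet'        # is a socket_vmnet daemon process running?
    sudo launchctl list | grep -i vmnet   # if managed by launchd, is its job loaded? (label varies by install)
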
	I0701 05:01:13.446298   11381 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0701 05:01:13.446349   11381 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.398329166s
	I0701 05:01:13.446378   11381 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0701 05:01:14.599641   11381 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0701 05:01:14.599688   11381 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.551802542s
	I0701 05:01:14.599738   11381 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0701 05:01:15.165883   11381 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0701 05:01:15.165940   11381 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.118051s
	I0701 05:01:15.165966   11381 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0701 05:01:15.771197   11381 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0701 05:01:15.771243   11381 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.723239667s
	I0701 05:01:15.771272   11381 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0701 05:01:17.479353   11381 start.go:360] acquireMachinesLock for test-preload-827000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:01:17.479779   11381 start.go:364] duration metric: took 349.416µs to acquireMachinesLock for "test-preload-827000"
	I0701 05:01:17.479924   11381 start.go:93] Provisioning new machine with config: &{Name:test-preload-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:01:17.480165   11381 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:01:17.491815   11381 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:01:17.542604   11381 start.go:159] libmachine.API.Create for "test-preload-827000" (driver="qemu2")
	I0701 05:01:17.542693   11381 client.go:168] LocalClient.Create starting
	I0701 05:01:17.542864   11381 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:01:17.542932   11381 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:17.542950   11381 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:17.543022   11381 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:01:17.543066   11381 main.go:141] libmachine: Decoding PEM data...
	I0701 05:01:17.543079   11381 main.go:141] libmachine: Parsing certificate...
	I0701 05:01:17.543584   11381 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:01:17.688132   11381 main.go:141] libmachine: Creating SSH key...
	I0701 05:01:17.812569   11381 main.go:141] libmachine: Creating Disk image...
	I0701 05:01:17.812576   11381 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:01:17.812736   11381 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2
	I0701 05:01:17.822110   11381 main.go:141] libmachine: STDOUT: 
	I0701 05:01:17.822127   11381 main.go:141] libmachine: STDERR: 
	I0701 05:01:17.822176   11381 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2 +20000M
	I0701 05:01:17.830095   11381 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:01:17.830121   11381 main.go:141] libmachine: STDERR: 
	I0701 05:01:17.830133   11381 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2
	I0701 05:01:17.830139   11381 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:01:17.830174   11381 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a6:e1:00:95:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/test-preload-827000/disk.qcow2
	I0701 05:01:17.831862   11381 main.go:141] libmachine: STDOUT: 
	I0701 05:01:17.831918   11381 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:01:17.831930   11381 client.go:171] duration metric: took 289.223334ms to LocalClient.Create
	I0701 05:01:19.146830   11381 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0701 05:01:19.146905   11381 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.098813375s
	I0701 05:01:19.146933   11381 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0701 05:01:19.146980   11381 cache.go:87] Successfully saved all images to host disk.
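
The interleaved cache.go lines show how the image cache is filled: images absent from the local Docker daemon are pulled from their registries, amd64-only manifests are rewritten for arm64 (the "arch mismatch ... fixing" warnings), and each image is saved as a tarball under .minikube/cache/images/arm64. One way to inspect the result, reusing the cache path from this log:

    ls /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/
    tar -tf /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 | head
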
	I0701 05:01:19.834142   11381 start.go:128] duration metric: took 2.353956042s to createHost
	I0701 05:01:19.834225   11381 start.go:83] releasing machines lock for "test-preload-827000", held for 2.354431166s
	W0701 05:01:19.834607   11381 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-827000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-827000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:01:19.847109   11381 out.go:177] 
	W0701 05:01:19.850995   11381 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:01:19.851026   11381 out.go:239] * 
	* 
	W0701 05:01:19.853801   11381 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:01:19.862020   11381 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-827000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-01 05:01:19.88059 -0700 PDT m=+700.458867251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-827000 -n test-preload-827000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-827000 -n test-preload-827000: exit status 7 (66.201167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-827000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-827000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-827000
--- FAIL: TestPreload (10.09s)
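
TestPreload never reached its preload assertions: the cluster VM was never created, even though the image-caching work above succeeded, so only the socket_vmnet networking is implicated. The same refuse/retry/delete/exit-80 (GUEST_PROVISION) pattern repeats in the tests below. A plausible remediation on the host, assuming socket_vmnet is managed through Homebrew (an assumption; the report does not show how it was installed):

    brew install socket_vmnet              # no-op if already installed
    sudo brew services start socket_vmnet  # start the daemon, which listens on /var/run/socket_vmnet
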

TestScheduledStopUnix (9.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-899000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-899000 --memory=2048 --driver=qemu2 : exit status 80 (9.8341695s)

-- stdout --
	* [scheduled-stop-899000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-899000" primary control-plane node in "scheduled-stop-899000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-899000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-899000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-899000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-899000" primary control-plane node in "scheduled-stop-899000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-899000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-899000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-01 05:01:29.859789 -0700 PDT m=+710.438108835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-899000 -n scheduled-stop-899000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-899000 -n scheduled-stop-899000: exit status 7 (67.623125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-899000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-899000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-899000
--- FAIL: TestScheduledStopUnix (9.98s)

TestSkaffold (12.15s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1290447715 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-292000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-292000 --memory=2600 --driver=qemu2 : exit status 80 (9.772802041s)

-- stdout --
	* [skaffold-292000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-292000" primary control-plane node in "skaffold-292000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-292000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-292000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-292000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-292000" primary control-plane node in "skaffold-292000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-292000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-292000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-01 05:01:42.015574 -0700 PDT m=+722.593945043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-292000 -n skaffold-292000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-292000 -n skaffold-292000: exit status 7 (64.2215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-292000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-292000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-292000
--- FAIL: TestSkaffold (12.15s)

TestRunningBinaryUpgrade (600.2s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3543205488 start -p running-upgrade-803000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3543205488 start -p running-upgrade-803000 --memory=2200 --vm-driver=qemu2 : (1m2.796211333s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-803000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-803000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.604521417s)

-- stdout --
	* [running-upgrade-803000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-803000" primary control-plane node in "running-upgrade-803000" cluster
	* Updating the running qemu2 "running-upgrade-803000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0701 05:03:28.559416   11792 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:03:28.559546   11792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:03:28.559549   11792 out.go:304] Setting ErrFile to fd 2...
	I0701 05:03:28.559551   11792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:03:28.559676   11792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:03:28.560695   11792 out.go:298] Setting JSON to false
	I0701 05:03:28.577132   11792 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7377,"bootTime":1719828031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:03:28.577203   11792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:03:28.582403   11792 out.go:177] * [running-upgrade-803000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:03:28.590421   11792 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:03:28.590471   11792 notify.go:220] Checking for updates...
	I0701 05:03:28.598378   11792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:03:28.599719   11792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:03:28.602319   11792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:03:28.605372   11792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:03:28.608384   11792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:03:28.611651   11792 config.go:182] Loaded profile config "running-upgrade-803000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:03:28.615297   11792 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0701 05:03:28.618403   11792 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:03:28.622330   11792 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:03:28.629358   11792 start.go:297] selected driver: qemu2
	I0701 05:03:28.629364   11792 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52167 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0701 05:03:28.629411   11792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:03:28.631665   11792 cni.go:84] Creating CNI manager for ""
	I0701 05:03:28.631682   11792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:03:28.631709   11792 start.go:340] cluster config:
	{Name:running-upgrade-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52167 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0701 05:03:28.631756   11792 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:03:28.638396   11792 out.go:177] * Starting "running-upgrade-803000" primary control-plane node in "running-upgrade-803000" cluster
	I0701 05:03:28.642354   11792 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0701 05:03:28.642370   11792 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0701 05:03:28.642378   11792 cache.go:56] Caching tarball of preloaded images
	I0701 05:03:28.642437   11792 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:03:28.642442   11792 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0701 05:03:28.642489   11792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/config.json ...
	I0701 05:03:28.642932   11792 start.go:360] acquireMachinesLock for running-upgrade-803000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:03:28.642972   11792 start.go:364] duration metric: took 33.333µs to acquireMachinesLock for "running-upgrade-803000"
	I0701 05:03:28.642982   11792 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:03:28.642987   11792 fix.go:54] fixHost starting: 
	I0701 05:03:28.643613   11792 fix.go:112] recreateIfNeeded on running-upgrade-803000: state=Running err=<nil>
	W0701 05:03:28.643623   11792 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:03:28.651255   11792 out.go:177] * Updating the running qemu2 "running-upgrade-803000" VM ...
	I0701 05:03:28.655313   11792 machine.go:94] provisionDockerMachine start ...
	I0701 05:03:28.655348   11792 main.go:141] libmachine: Using SSH client type: native
	I0701 05:03:28.655449   11792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10446a8e0] 0x10446d140 <nil>  [] 0s} localhost 52135 <nil> <nil>}
	I0701 05:03:28.655454   11792 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 05:03:28.714423   11792 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-803000
	
	I0701 05:03:28.714436   11792 buildroot.go:166] provisioning hostname "running-upgrade-803000"
	I0701 05:03:28.714486   11792 main.go:141] libmachine: Using SSH client type: native
	I0701 05:03:28.714596   11792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10446a8e0] 0x10446d140 <nil>  [] 0s} localhost 52135 <nil> <nil>}
	I0701 05:03:28.714603   11792 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-803000 && echo "running-upgrade-803000" | sudo tee /etc/hostname
	I0701 05:03:28.777459   11792 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-803000
	
	I0701 05:03:28.777514   11792 main.go:141] libmachine: Using SSH client type: native
	I0701 05:03:28.777639   11792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10446a8e0] 0x10446d140 <nil>  [] 0s} localhost 52135 <nil> <nil>}
	I0701 05:03:28.777648   11792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-803000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-803000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-803000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 05:03:28.833843   11792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 05:03:28.833852   11792 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19166-9507/.minikube CaCertPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19166-9507/.minikube}
	I0701 05:03:28.833864   11792 buildroot.go:174] setting up certificates
	I0701 05:03:28.833871   11792 provision.go:84] configureAuth start
	I0701 05:03:28.833875   11792 provision.go:143] copyHostCerts
	I0701 05:03:28.833971   11792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19166-9507/.minikube/key.pem, removing ...
	I0701 05:03:28.833976   11792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19166-9507/.minikube/key.pem
	I0701 05:03:28.834102   11792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19166-9507/.minikube/key.pem (1679 bytes)
	I0701 05:03:28.834270   11792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.pem, removing ...
	I0701 05:03:28.834274   11792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.pem
	I0701 05:03:28.834325   11792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.pem (1082 bytes)
	I0701 05:03:28.834424   11792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19166-9507/.minikube/cert.pem, removing ...
	I0701 05:03:28.834427   11792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19166-9507/.minikube/cert.pem
	I0701 05:03:28.834476   11792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19166-9507/.minikube/cert.pem (1123 bytes)
	I0701 05:03:28.834592   11792 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-803000 san=[127.0.0.1 localhost minikube running-upgrade-803000]
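
The server certificate minted here is signed by the local minikube CA and carries the SANs listed in the log line above (127.0.0.1, localhost, minikube, running-upgrade-803000), so the VM's TLS-guarded Docker endpoint can be addressed through the forwarded localhost port. A quick way to confirm the SANs on such a cert, using the path from the log:

    openssl x509 -in /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
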
	I0701 05:03:28.945499   11792 provision.go:177] copyRemoteCerts
	I0701 05:03:28.945540   11792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 05:03:28.945549   11792 sshutil.go:53] new ssh client: &{IP:localhost Port:52135 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/running-upgrade-803000/id_rsa Username:docker}
	I0701 05:03:28.977959   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0701 05:03:28.984435   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0701 05:03:28.995634   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 05:03:29.002437   11792 provision.go:87] duration metric: took 168.548708ms to configureAuth
	I0701 05:03:29.002447   11792 buildroot.go:189] setting minikube options for container-runtime
	I0701 05:03:29.002549   11792 config.go:182] Loaded profile config "running-upgrade-803000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:03:29.002586   11792 main.go:141] libmachine: Using SSH client type: native
	I0701 05:03:29.002671   11792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10446a8e0] 0x10446d140 <nil>  [] 0s} localhost 52135 <nil> <nil>}
	I0701 05:03:29.002678   11792 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 05:03:29.063564   11792 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 05:03:29.063571   11792 buildroot.go:70] root file system type: tmpfs
	I0701 05:03:29.063621   11792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 05:03:29.063665   11792 main.go:141] libmachine: Using SSH client type: native
	I0701 05:03:29.063770   11792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10446a8e0] 0x10446d140 <nil>  [] 0s} localhost 52135 <nil> <nil>}
	I0701 05:03:29.063803   11792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 05:03:29.125898   11792 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 05:03:29.125941   11792 main.go:141] libmachine: Using SSH client type: native
	I0701 05:03:29.126057   11792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10446a8e0] 0x10446d140 <nil>  [] 0s} localhost 52135 <nil> <nil>}
	I0701 05:03:29.126065   11792 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 05:03:29.181956   11792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
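The one-liner above is an idempotency guard: `diff -u` exits non-zero only when the freshly rendered docker.service.new differs from the installed unit, and only in that case is the file moved into place and docker daemon-reloaded, enabled, and restarted. A sketch of the same write-if-changed idea in Go (paths and the caller-side restart decision are illustrative, not minikube's API):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged follows the diff-or-replace idiom from the log:
// leave the target untouched (no restart needed) when content is
// identical, otherwise stage a .new file and rename it into place.
func writeIfChanged(path string, want []byte) (changed bool, err error) {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return false, nil // identical: keep the unit, skip the restart
	}
	if err := os.WriteFile(path+".new", want, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(path+".new", path)
}

func main() {
	changed, err := writeIfChanged("docker.service", []byte("[Unit]\n"))
	fmt.Println("changed:", changed, "err:", err)
}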
	I0701 05:03:29.181966   11792 machine.go:97] duration metric: took 526.649125ms to provisionDockerMachine
	I0701 05:03:29.181972   11792 start.go:293] postStartSetup for "running-upgrade-803000" (driver="qemu2")
	I0701 05:03:29.181978   11792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 05:03:29.182034   11792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 05:03:29.182042   11792 sshutil.go:53] new ssh client: &{IP:localhost Port:52135 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/running-upgrade-803000/id_rsa Username:docker}
	I0701 05:03:29.213311   11792 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 05:03:29.214535   11792 info.go:137] Remote host: Buildroot 2021.02.12
	I0701 05:03:29.214543   11792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19166-9507/.minikube/addons for local assets ...
	I0701 05:03:29.214623   11792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19166-9507/.minikube/files for local assets ...
	I0701 05:03:29.214741   11792 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem -> 100032.pem in /etc/ssl/certs
	I0701 05:03:29.214870   11792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 05:03:29.217799   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem --> /etc/ssl/certs/100032.pem (1708 bytes)
	I0701 05:03:29.226483   11792 start.go:296] duration metric: took 44.503791ms for postStartSetup
	I0701 05:03:29.226504   11792 fix.go:56] duration metric: took 583.52ms for fixHost
	I0701 05:03:29.226551   11792 main.go:141] libmachine: Using SSH client type: native
	I0701 05:03:29.226682   11792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10446a8e0] 0x10446d140 <nil>  [] 0s} localhost 52135 <nil> <nil>}
	I0701 05:03:29.226689   11792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 05:03:29.288344   11792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719835409.165722389
	
	I0701 05:03:29.288353   11792 fix.go:216] guest clock: 1719835409.165722389
	I0701 05:03:29.288357   11792 fix.go:229] Guest: 2024-07-01 05:03:29.165722389 -0700 PDT Remote: 2024-07-01 05:03:29.226505 -0700 PDT m=+0.687302335 (delta=-60.782611ms)
	I0701 05:03:29.288368   11792 fix.go:200] guest clock delta is within tolerance: -60.782611ms
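The clock check compares `date +%s.%N` from the guest against host wall time and skips resynchronization when the delta is small, as here (about -61ms). A sketch of parsing that output and applying a tolerance (the 1s threshold is illustrative, not necessarily minikube's):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseEpochNanos parses the "seconds.nanoseconds" form emitted by
// `date +%s.%N`. %N is zero-padded to exactly 9 digits, so the
// fractional part can be fed to time.Unix as nanoseconds directly.
func parseEpochNanos(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseEpochNanos("1719835409.165722389") // value from the log
	delta := time.Since(guest)
	fmt.Println("delta:", delta, "within tolerance:", math.Abs(delta.Seconds()) < 1.0)
}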
	I0701 05:03:29.288371   11792 start.go:83] releasing machines lock for "running-upgrade-803000", held for 645.397708ms
	I0701 05:03:29.288440   11792 ssh_runner.go:195] Run: cat /version.json
	I0701 05:03:29.288450   11792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 05:03:29.288448   11792 sshutil.go:53] new ssh client: &{IP:localhost Port:52135 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/running-upgrade-803000/id_rsa Username:docker}
	I0701 05:03:29.288471   11792 sshutil.go:53] new ssh client: &{IP:localhost Port:52135 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/running-upgrade-803000/id_rsa Username:docker}
	W0701 05:03:29.289060   11792 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:52246->127.0.0.1:52135: write: broken pipe
	I0701 05:03:29.289081   11792 retry.go:31] will retry after 345.905397ms: ssh: handshake failed: write tcp 127.0.0.1:52246->127.0.0.1:52135: write: broken pipe
	W0701 05:03:29.320782   11792 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0701 05:03:29.320825   11792 ssh_runner.go:195] Run: systemctl --version
	I0701 05:03:29.322855   11792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 05:03:29.324643   11792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 05:03:29.324669   11792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0701 05:03:29.327655   11792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0701 05:03:29.332145   11792 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 05:03:29.332151   11792 start.go:494] detecting cgroup driver to use...
	I0701 05:03:29.332250   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 05:03:29.339848   11792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0701 05:03:29.342922   11792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 05:03:29.346240   11792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 05:03:29.346265   11792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 05:03:29.349491   11792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 05:03:29.352543   11792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 05:03:29.355338   11792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 05:03:29.358601   11792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 05:03:29.361993   11792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 05:03:29.365226   11792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 05:03:29.367985   11792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
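The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox image, flip SystemdCgroup off so containerd uses the cgroupfs driver, migrate runtime names to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. The cgroup-driver rewrite, expressed as the equivalent Go regexp for illustration:

package main

import (
	"fmt"
	"regexp"
)

// Same substitution as the sed line in the log:
//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	fmt.Print(systemdCgroupRe.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}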
	I0701 05:03:29.370781   11792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 05:03:29.374007   11792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 05:03:29.377126   11792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:03:29.471169   11792 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 05:03:29.481341   11792 start.go:494] detecting cgroup driver to use...
	I0701 05:03:29.481426   11792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 05:03:29.489666   11792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 05:03:29.497221   11792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 05:03:29.502999   11792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 05:03:29.508320   11792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 05:03:29.512752   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 05:03:29.518195   11792 ssh_runner.go:195] Run: which cri-dockerd
	I0701 05:03:29.519538   11792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 05:03:29.522514   11792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 05:03:29.527761   11792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 05:03:29.624437   11792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 05:03:29.712899   11792 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 05:03:29.712951   11792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 05:03:29.718490   11792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:03:29.812271   11792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 05:03:32.603847   11792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.791568667s)
	I0701 05:03:32.603913   11792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 05:03:32.608335   11792 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 05:03:32.613902   11792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 05:03:32.618697   11792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 05:03:32.695170   11792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 05:03:32.770905   11792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:03:32.855219   11792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 05:03:32.861333   11792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 05:03:32.866523   11792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:03:32.943739   11792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 05:03:32.984862   11792 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 05:03:32.984927   11792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 05:03:32.987729   11792 start.go:562] Will wait 60s for crictl version
	I0701 05:03:32.987772   11792 ssh_runner.go:195] Run: which crictl
	I0701 05:03:32.989236   11792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 05:03:33.000643   11792 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0701 05:03:33.000725   11792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 05:03:33.013292   11792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 05:03:33.042423   11792 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0701 05:03:33.042489   11792 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0701 05:03:33.043948   11792 kubeadm.go:877] updating cluster {Name:running-upgrade-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52167 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0701 05:03:33.043990   11792 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0701 05:03:33.044031   11792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 05:03:33.054649   11792 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0701 05:03:33.054665   11792 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0701 05:03:33.054709   11792 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 05:03:33.058148   11792 ssh_runner.go:195] Run: which lz4
	I0701 05:03:33.059594   11792 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0701 05:03:33.060804   11792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0701 05:03:33.060816   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0701 05:03:33.954654   11792 docker.go:649] duration metric: took 895.098833ms to copy over tarball
	I0701 05:03:33.954708   11792 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0701 05:03:35.139940   11792 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.185224667s)
	I0701 05:03:35.139956   11792 ssh_runner.go:146] rm: /preloaded.tar.lz4
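The preload path copies a roughly 360MB lz4-compressed image tarball into the guest and unpacks it over /var, seeding /var/lib/docker directly instead of pulling each image from a registry, then deletes the tarball. A local sketch of that tar invocation with the duration metric the log reports (paths as in the log; requires a tar that accepts -I lz4):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("tar failed: %v\n%s", err, out)
		return
	}
	fmt.Println("extracted in", time.Since(start))
}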
	I0701 05:03:35.155784   11792 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 05:03:35.158912   11792 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0701 05:03:35.164676   11792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:03:35.241865   11792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 05:03:36.429622   11792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.187744958s)
	I0701 05:03:36.429714   11792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 05:03:36.442819   11792 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0701 05:03:36.442828   11792 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0701 05:03:36.442833   11792 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0701 05:03:36.448363   11792 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:03:36.450394   11792 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:03:36.451708   11792 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:03:36.452213   11792 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:03:36.453420   11792 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:03:36.453730   11792 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:03:36.454747   11792 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:03:36.454794   11792 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:03:36.455446   11792 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:03:36.455779   11792 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0701 05:03:36.456865   11792 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:03:36.456994   11792 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:03:36.458241   11792 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0701 05:03:36.458375   11792 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:03:36.459239   11792 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:03:36.460002   11792 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:03:36.842809   11792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:03:36.857031   11792 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0701 05:03:36.857058   11792 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:03:36.857112   11792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:03:36.868020   11792 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0701 05:03:36.874630   11792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:03:36.876320   11792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0701 05:03:36.877401   11792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0701 05:03:36.879613   11792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:03:36.898597   11792 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0701 05:03:36.898619   11792 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:03:36.898676   11792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:03:36.906036   11792 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0701 05:03:36.906054   11792 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0701 05:03:36.906065   11792 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:03:36.906056   11792 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:03:36.906120   11792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:03:36.906172   11792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0701 05:03:36.906251   11792 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0701 05:03:36.906260   11792 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0701 05:03:36.906276   11792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0701 05:03:36.916492   11792 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0701 05:03:36.917601   11792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W0701 05:03:36.929923   11792 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0701 05:03:36.930053   11792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:03:36.938595   11792 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0701 05:03:36.938618   11792 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0701 05:03:36.938654   11792 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0701 05:03:36.938690   11792 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0701 05:03:36.938706   11792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0701 05:03:36.938706   11792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0701 05:03:36.938707   11792 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:03:36.938793   11792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:03:36.944975   11792 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0701 05:03:36.944997   11792 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:03:36.945049   11792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:03:36.953168   11792 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0701 05:03:36.953173   11792 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0701 05:03:36.953196   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0701 05:03:36.953220   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0701 05:03:36.953294   11792 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0701 05:03:36.980055   11792 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0701 05:03:36.980178   11792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0701 05:03:36.986728   11792 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0701 05:03:36.986742   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0701 05:03:36.998619   11792 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0701 05:03:36.998647   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0701 05:03:37.067994   11792 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0701 05:03:37.079373   11792 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0701 05:03:37.079402   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0701 05:03:37.112505   11792 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0701 05:03:37.112619   11792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:03:37.181917   11792 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0701 05:03:37.181929   11792 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0701 05:03:37.181949   11792 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:03:37.182002   11792 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:03:37.247212   11792 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0701 05:03:37.247336   11792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0701 05:03:37.248825   11792 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0701 05:03:37.248843   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0701 05:03:37.302951   11792 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0701 05:03:37.302967   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0701 05:03:37.695678   11792 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0701 05:03:37.695702   11792 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0701 05:03:37.695714   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0701 05:03:37.865939   11792 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0701 05:03:37.865980   11792 cache_images.go:92] duration metric: took 1.423142792s to LoadCachedImages
	W0701 05:03:37.866021   11792 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
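The failure above is mundane: the extracted preload carries k8s.gcr.io tags while this minikube expects registry.k8s.io names, so each expected image is removed and reloaded from the per-image cache on the host; the kube-scheduler cache file turns out not to exist there, so LoadCachedImages aborts with the stat error shown. A minimal reproduction of that existence check (relative path illustrative):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	p := ".minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1"
	if _, err := os.Stat(p); errors.Is(err, fs.ErrNotExist) {
		fmt.Println("Unable to load cached images:", err)
	}
}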
	I0701 05:03:37.866029   11792 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0701 05:03:37.866083   11792 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-803000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 05:03:37.866163   11792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 05:03:37.887583   11792 cni.go:84] Creating CNI manager for ""
	I0701 05:03:37.887597   11792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:03:37.887605   11792 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 05:03:37.887615   11792 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-803000 NodeName:running-upgrade-803000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 05:03:37.887688   11792 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-803000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
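The generated kubeadm config above is a single multi-document YAML stream: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration that kubeadm forwards to the components. A quick sanity check that walks such a stream and prints each document's kind, assuming gopkg.in/yaml.v3 is available on the module path:

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	const cfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(cfg))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}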
	
	I0701 05:03:37.887741   11792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0701 05:03:37.891314   11792 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 05:03:37.891349   11792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 05:03:37.894763   11792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0701 05:03:37.900705   11792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 05:03:37.905748   11792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0701 05:03:37.910616   11792 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0701 05:03:37.911732   11792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:03:38.002501   11792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 05:03:38.008096   11792 certs.go:68] Setting up /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000 for IP: 10.0.2.15
	I0701 05:03:38.008113   11792 certs.go:194] generating shared ca certs ...
	I0701 05:03:38.008123   11792 certs.go:226] acquiring lock for ca certs: {Name:mkd4046b456c87b80b2e6f34890c01f767ca15e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:03:38.008363   11792 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.key
	I0701 05:03:38.008417   11792 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/proxy-client-ca.key
	I0701 05:03:38.008423   11792 certs.go:256] generating profile certs ...
	I0701 05:03:38.008486   11792 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/client.key
	I0701 05:03:38.008501   11792 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.key.2a0b8720
	I0701 05:03:38.008511   11792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.crt.2a0b8720 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0701 05:03:38.067936   11792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.crt.2a0b8720 ...
	I0701 05:03:38.067941   11792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.crt.2a0b8720: {Name:mk47a6038c47e48cca83836c97bbe67cc1369c32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:03:38.068152   11792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.key.2a0b8720 ...
	I0701 05:03:38.068156   11792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.key.2a0b8720: {Name:mk242bc8ce0dfddb05ebac7d23a73abe557413c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:03:38.068278   11792 certs.go:381] copying /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.crt.2a0b8720 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.crt
	I0701 05:03:38.068441   11792 certs.go:385] copying /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.key.2a0b8720 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.key
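The apiserver serving cert is reissued with four IP SANs, the in-cluster service VIP 10.96.0.1, loopback, 10.0.0.1, and the node address 10.0.2.15, then copied from its hash-suffixed staging name into place. A self-signed stand-in showing the same SAN set via crypto/x509 (the real cert is signed by the shared minikubeCA, not self-signed; error handling elided for brevity):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{ // SAN set copied from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println("cert bytes:", len(der), "err:", err)
}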
	I0701 05:03:38.068582   11792 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/proxy-client.key
	I0701 05:03:38.068715   11792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/10003.pem (1338 bytes)
	W0701 05:03:38.068745   11792 certs.go:480] ignoring /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/10003_empty.pem, impossibly tiny 0 bytes
	I0701 05:03:38.068751   11792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca-key.pem (1679 bytes)
	I0701 05:03:38.068771   11792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem (1082 bytes)
	I0701 05:03:38.068789   11792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem (1123 bytes)
	I0701 05:03:38.068806   11792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/key.pem (1679 bytes)
	I0701 05:03:38.068843   11792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem (1708 bytes)
	I0701 05:03:38.069169   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 05:03:38.075909   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 05:03:38.083228   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 05:03:38.090621   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0701 05:03:38.098189   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0701 05:03:38.104520   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 05:03:38.111130   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 05:03:38.118554   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 05:03:38.125893   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 05:03:38.132823   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/10003.pem --> /usr/share/ca-certificates/10003.pem (1338 bytes)
	I0701 05:03:38.139496   11792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem --> /usr/share/ca-certificates/100032.pem (1708 bytes)
	I0701 05:03:38.146426   11792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 05:03:38.151418   11792 ssh_runner.go:195] Run: openssl version
	I0701 05:03:38.153256   11792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 05:03:38.156304   11792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 05:03:38.157743   11792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:03 /usr/share/ca-certificates/minikubeCA.pem
	I0701 05:03:38.157761   11792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 05:03:38.159529   11792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 05:03:38.162526   11792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10003.pem && ln -fs /usr/share/ca-certificates/10003.pem /etc/ssl/certs/10003.pem"
	I0701 05:03:38.165856   11792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10003.pem
	I0701 05:03:38.167207   11792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 11:50 /usr/share/ca-certificates/10003.pem
	I0701 05:03:38.167227   11792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10003.pem
	I0701 05:03:38.169102   11792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10003.pem /etc/ssl/certs/51391683.0"
	I0701 05:03:38.171744   11792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100032.pem && ln -fs /usr/share/ca-certificates/100032.pem /etc/ssl/certs/100032.pem"
	I0701 05:03:38.174762   11792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100032.pem
	I0701 05:03:38.176391   11792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 11:50 /usr/share/ca-certificates/100032.pem
	I0701 05:03:38.176414   11792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100032.pem
	I0701 05:03:38.178145   11792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100032.pem /etc/ssl/certs/3ec20f2e.0"
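Each CA above is installed by linking it under /usr/share/ca-certificates and then symlinking /etc/ssl/certs/<subject-hash>.0 to it; the hash names (b5213941, 51391683, 3ec20f2e) come from `openssl x509 -hash`. The same hash lookup, shelling out the way the provisioner does (path copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("would link /etc/ssl/certs/%s.0 -> minikubeCA.pem\n", hash)
}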
	I0701 05:03:38.181981   11792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 05:03:38.184289   11792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 05:03:38.187948   11792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 05:03:38.189977   11792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 05:03:38.192678   11792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 05:03:38.195473   11792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 05:03:38.197762   11792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0701 05:03:38.200102   11792 kubeadm.go:391] StartCluster: {Name:running-upgrade-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52167 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0701 05:03:38.200174   11792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 05:03:38.216558   11792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 05:03:38.219571   11792 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 05:03:38.219578   11792 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 05:03:38.219581   11792 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 05:03:38.219604   11792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 05:03:38.222419   11792 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 05:03:38.222456   11792 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-803000" does not appear in /Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:03:38.222471   11792 kubeconfig.go:62] /Users/jenkins/minikube-integration/19166-9507/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-803000" cluster setting kubeconfig missing "running-upgrade-803000" context setting]
	I0701 05:03:38.222654   11792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/kubeconfig: {Name:mk4c90f67cab929310b408048010034fbad6f093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:03:38.223594   11792 kapi.go:59] client config for running-upgrade-803000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/client.key", CAFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1057f9090), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 05:03:38.224490   11792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 05:03:38.227388   11792 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-803000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0701 05:03:38.227393   11792 kubeadm.go:1154] stopping kube-system containers ...
	I0701 05:03:38.227447   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 05:03:38.238490   11792 docker.go:483] Stopping containers: [9179c9dfd861 733dd18661a7 c4ff53b84493 7e27c90b48d8 f02556430e2e 4cab4bbd2198 f98621e66e54 1a86db4e3be3 59e69595559e f13cb6673393 03e51df7aae3 3afa9f863197 f3ec9d500953 7134eeb15838 d45896f96e75 4aa6973d6c72 520da3924b64 1e89eabfe128]
	I0701 05:03:38.238557   11792 ssh_runner.go:195] Run: docker stop 9179c9dfd861 733dd18661a7 c4ff53b84493 7e27c90b48d8 f02556430e2e 4cab4bbd2198 f98621e66e54 1a86db4e3be3 59e69595559e f13cb6673393 03e51df7aae3 3afa9f863197 f3ec9d500953 7134eeb15838 d45896f96e75 4aa6973d6c72 520da3924b64 1e89eabfe128
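Stopping the kube-system containers above is two docker calls: list IDs whose names match the kubelet's k8s_<container>_<pod>_(kube-system)_ pattern, then stop them all in one invocation. Reconstructed as a sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter and format as the `docker ps -a` line in the log.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		fmt.Println("docker stop failed:", err)
	}
}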
	I0701 05:03:38.302914   11792 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 05:03:38.387226   11792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 05:03:38.391377   11792 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Jul  1 12:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul  1 12:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul  1 12:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul  1 12:03 /etc/kubernetes/scheduler.conf
	
	I0701 05:03:38.391406   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/admin.conf
	I0701 05:03:38.394675   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 05:03:38.394704   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0701 05:03:38.397879   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/kubelet.conf
	I0701 05:03:38.401003   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 05:03:38.401022   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0701 05:03:38.404411   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/controller-manager.conf
	I0701 05:03:38.407523   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 05:03:38.407544   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 05:03:38.410154   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/scheduler.conf
	I0701 05:03:38.412711   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 05:03:38.412732   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
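The four grep/rm pairs above implement a simple staleness test: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and a grep exit status of 1 (no match) means the file points at an old endpoint and is deleted so the subsequent `kubeadm init phase kubeconfig` can regenerate it. A condensed sketch of that loop, with the endpoint and file list taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:52167"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 0 on a match; any other exit means the expected
		// endpoint is absent, so the kubeconfig is removed and left for
		// kubeadm to regenerate.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}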
	I0701 05:03:38.415630   11792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 05:03:38.418391   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:03:38.447575   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:03:39.047311   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:03:39.235947   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:03:39.257274   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
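Instead of a full `kubeadm init`, the restart path replays individual init phases in a fixed order (certs, kubeconfig, kubelet-start, control-plane, etcd), each pointed at the copied /var/tmp/minikube/kubeadm.yaml with PATH prefixed by the versioned binaries directory. A sketch of that sequence under the same assumptions — error handling is simplified, and minikube actually wraps each call in `/bin/bash -c`:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.24.1"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		// sudo env PATH=<binDir>:$PATH kubeadm init phase <p...> --config <cfg>
		args := append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH"),
			"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", cfg)
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", p, err)
			return
		}
	}
}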
	I0701 05:03:39.280248   11792 api_server.go:52] waiting for apiserver process to appear ...
	I0701 05:03:39.280494   11792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:03:39.782646   11792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:03:40.282638   11792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:03:40.780437   11792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:03:41.282396   11792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:03:41.287043   11792 api_server.go:72] duration metric: took 2.006806041s to wait for apiserver process to appear ...
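The process wait above is a plain poll: `pgrep -xnf kube-apiserver.*minikube.*` is retried on a roughly 500ms cadence until it exits 0, and the elapsed time is then emitted as a duration metric (2.006806041s here). A self-contained sketch of that loop; the timeout value is an assumption, since the log never shows one being hit at this step:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline
// passes, mirroring the ~500ms retry cadence visible in the log.
func waitForProcess(pattern string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for time.Since(start) < timeout {
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return time.Since(start), nil // process appeared
		}
		time.Sleep(500 * time.Millisecond)
	}
	return 0, fmt.Errorf("process %q did not appear within %s", pattern, timeout)
}

func main() {
	d, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("duration metric: took %s to wait for apiserver process to appear\n", d)
}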
	I0701 05:03:41.287053   11792 api_server.go:88] waiting for apiserver healthz status ...
	I0701 05:03:41.287082   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:03:46.287406   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:03:46.287490   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:03:51.289241   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:03:51.289320   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:03:56.290526   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:03:56.290602   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:01.291661   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:04:01.291699   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:06.292769   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:04:06.292864   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:11.294660   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:04:11.294765   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:16.295362   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:04:16.295456   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:21.297973   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:04:21.298051   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:26.300599   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:04:26.300671   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:31.301597   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:04:31.301645   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:36.304015   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:04:36.304053   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:41.304976   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
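From here the run degenerates into a probe loop: each healthz attempt against https://10.0.2.15:8443/healthz is cut off by a 5-second client timeout ("context deadline exceeded ... while awaiting headers"), after which minikube falls back to gathering component logs before retrying. A minimal reproduction of a single probe — TLS verification is skipped here for brevity, whereas minikube validates against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A 5s client timeout produces exactly the error shape seen in the log:
	// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}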
	I0701 05:04:41.305336   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:04:41.334388   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:04:41.334509   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:04:41.352773   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:04:41.352859   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:04:41.367038   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:04:41.367101   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:04:41.378897   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:04:41.378980   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:04:41.398938   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:04:41.399011   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:04:41.410085   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:04:41.410153   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:04:41.420555   11792 logs.go:276] 0 containers: []
	W0701 05:04:41.420568   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:04:41.420633   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:04:41.431489   11792 logs.go:276] 0 containers: []
	W0701 05:04:41.431501   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:04:41.431508   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:04:41.431514   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:04:41.468153   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:04:41.468162   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:04:41.479747   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:04:41.479758   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:04:41.505906   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:04:41.505913   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:04:41.530978   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:04:41.530988   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:04:41.546340   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:04:41.546353   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:04:41.560708   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:04:41.560719   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:04:41.565183   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:04:41.565189   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:04:41.644602   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:04:41.644614   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:04:41.658922   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:04:41.658937   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:04:41.676167   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:04:41.676177   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:04:41.687551   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:04:41.687564   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:04:41.699181   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:04:41.699191   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:04:41.713007   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:04:41.713016   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:04:41.724560   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:04:41.724569   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
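Each gathering pass follows the same recipe: resolve container IDs per component with a docker name filter (k8s_<component>), tail the last 400 lines of each, and round out the picture with journalctl for kubelet and Docker, dmesg, `kubectl describe nodes`, and a crictl/docker ps fallback for container status. A condensed sketch of the per-component part, with the component list taken from the log and a hypothetical helper name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the k8s_<component> prefix, one ID per line.
func containerIDs(component string) []string {
	out, _ := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids := containerIDs(c)
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}

Note that both an old and a new container ID show up for the apiserver, etcd, scheduler, and controller-manager (the pre- and post-reconfigure instances), which is why each of those components is tailed twice per pass.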
	I0701 05:04:44.238037   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:49.240758   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:04:49.241172   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:04:49.286337   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:04:49.286498   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:04:49.306043   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:04:49.306148   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:04:49.320358   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:04:49.320424   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:04:49.332997   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:04:49.333071   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:04:49.347697   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:04:49.347768   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:04:49.358483   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:04:49.358542   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:04:49.368234   11792 logs.go:276] 0 containers: []
	W0701 05:04:49.368244   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:04:49.368290   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:04:49.378611   11792 logs.go:276] 0 containers: []
	W0701 05:04:49.378619   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:04:49.378626   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:04:49.378634   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:04:49.419107   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:04:49.419122   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:04:49.432829   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:04:49.432841   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:04:49.437115   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:04:49.437123   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:04:49.449240   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:04:49.449250   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:04:49.461404   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:04:49.461416   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:04:49.475153   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:04:49.475163   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:04:49.499587   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:04:49.499598   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:04:49.514169   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:04:49.514180   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:04:49.527689   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:04:49.527699   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:04:49.541535   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:04:49.541544   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:04:49.559311   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:04:49.559322   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:04:49.597720   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:04:49.597728   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:04:49.615570   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:04:49.615583   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:04:49.642494   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:04:49.642501   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:04:52.158980   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:04:57.161270   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:04:57.161687   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:04:57.204002   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:04:57.204116   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:04:57.224943   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:04:57.225033   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:04:57.238914   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:04:57.238984   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:04:57.250870   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:04:57.250942   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:04:57.261800   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:04:57.261875   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:04:57.274312   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:04:57.274379   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:04:57.285352   11792 logs.go:276] 0 containers: []
	W0701 05:04:57.285367   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:04:57.285425   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:04:57.295504   11792 logs.go:276] 0 containers: []
	W0701 05:04:57.295513   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:04:57.295520   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:04:57.295525   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:04:57.309205   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:04:57.309216   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:04:57.320344   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:04:57.320356   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:04:57.338683   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:04:57.338695   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:04:57.342924   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:04:57.342933   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:04:57.377862   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:04:57.377874   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:04:57.394563   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:04:57.394575   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:04:57.420474   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:04:57.420481   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:04:57.458965   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:04:57.458972   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:04:57.470189   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:04:57.470200   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:04:57.495456   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:04:57.495473   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:04:57.510030   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:04:57.510041   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:04:57.527934   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:04:57.527944   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:04:57.539063   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:04:57.539076   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:04:57.551055   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:04:57.551066   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:05:00.066932   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:05:05.069698   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:05:05.069978   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:05:05.105568   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:05:05.105675   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:05:05.123534   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:05:05.123621   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:05:05.136631   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:05:05.136701   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:05:05.148283   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:05:05.148348   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:05:05.163187   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:05:05.163259   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:05:05.173994   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:05:05.174060   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:05:05.184620   11792 logs.go:276] 0 containers: []
	W0701 05:05:05.184631   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:05:05.184693   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:05:05.195054   11792 logs.go:276] 0 containers: []
	W0701 05:05:05.195066   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:05:05.195075   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:05:05.195081   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:05:05.218962   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:05:05.218974   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:05:05.232473   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:05:05.232485   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:05:05.252859   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:05:05.252872   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:05:05.267184   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:05:05.267196   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:05:05.280178   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:05:05.280191   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:05:05.318607   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:05:05.318616   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:05:05.322977   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:05:05.322984   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:05:05.339021   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:05:05.339032   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:05:05.373840   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:05:05.373848   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:05:05.387875   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:05:05.387884   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:05:05.405444   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:05:05.405454   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:05:05.419279   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:05:05.419292   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:05:05.430640   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:05:05.430651   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:05:05.442360   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:05:05.442373   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:05:07.968659   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:05:12.971274   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:05:12.971495   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:05:12.995996   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:05:12.996088   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:05:13.011746   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:05:13.011822   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:05:13.024674   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:05:13.024743   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:05:13.035679   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:05:13.035741   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:05:13.046106   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:05:13.046163   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:05:13.060435   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:05:13.060501   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:05:13.073484   11792 logs.go:276] 0 containers: []
	W0701 05:05:13.073498   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:05:13.073559   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:05:13.084178   11792 logs.go:276] 0 containers: []
	W0701 05:05:13.084188   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:05:13.084195   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:05:13.084200   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:05:13.118314   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:05:13.118327   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:05:13.133151   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:05:13.133162   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:05:13.145060   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:05:13.145070   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:05:13.158535   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:05:13.158545   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:05:13.176759   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:05:13.176768   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:05:13.194338   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:05:13.194350   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:05:13.218178   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:05:13.218191   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:05:13.230994   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:05:13.231003   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:05:13.242440   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:05:13.242451   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:05:13.247092   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:05:13.247101   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:05:13.260008   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:05:13.260020   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:05:13.284982   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:05:13.284992   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:05:13.296181   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:05:13.296191   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:05:13.332508   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:05:13.332515   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:05:15.845839   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:05:20.848686   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:05:20.849230   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:05:20.887055   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:05:20.887192   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:05:20.908849   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:05:20.908967   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:05:20.925203   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:05:20.925286   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:05:20.937882   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:05:20.937956   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:05:20.948740   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:05:20.948802   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:05:20.959259   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:05:20.959325   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:05:20.974190   11792 logs.go:276] 0 containers: []
	W0701 05:05:20.974202   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:05:20.974255   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:05:20.986908   11792 logs.go:276] 0 containers: []
	W0701 05:05:20.986918   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:05:20.986947   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:05:20.986952   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:05:21.000710   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:05:21.000723   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:05:21.012952   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:05:21.012961   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:05:21.017217   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:05:21.017227   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:05:21.040613   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:05:21.040624   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:05:21.054432   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:05:21.054449   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:05:21.072094   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:05:21.072106   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:05:21.106537   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:05:21.106549   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:05:21.128806   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:05:21.128818   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:05:21.140719   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:05:21.140730   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:05:21.179464   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:05:21.179476   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:05:21.190675   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:05:21.190688   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:05:21.202258   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:05:21.202269   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:05:21.214181   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:05:21.214191   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:05:21.228374   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:05:21.228386   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:05:23.755036   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:05:28.757561   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:05:28.757813   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:05:28.783500   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:05:28.783624   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:05:28.800296   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:05:28.800376   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:05:28.813279   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:05:28.813347   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:05:28.825484   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:05:28.825552   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:05:28.835625   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:05:28.835682   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:05:28.846437   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:05:28.846502   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:05:28.856326   11792 logs.go:276] 0 containers: []
	W0701 05:05:28.856336   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:05:28.856388   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:05:28.866806   11792 logs.go:276] 0 containers: []
	W0701 05:05:28.866818   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:05:28.866825   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:05:28.866830   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:05:28.884613   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:05:28.884623   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:05:28.897735   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:05:28.897747   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:05:28.902238   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:05:28.902247   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:05:28.915735   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:05:28.915746   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:05:28.928978   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:05:28.928988   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:05:28.940032   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:05:28.940041   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:05:28.951393   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:05:28.951403   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:05:28.985335   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:05:28.985347   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:05:29.008599   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:05:29.008608   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:05:29.020141   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:05:29.020152   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:05:29.043862   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:05:29.043868   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:05:29.058188   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:05:29.058200   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:05:29.069516   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:05:29.069525   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:05:29.105760   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:05:29.105770   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:05:31.623625   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:05:36.626503   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:05:36.626847   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:05:36.666427   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:05:36.666566   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:05:36.688358   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:05:36.688471   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:05:36.704447   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:05:36.704516   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:05:36.717353   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:05:36.717420   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:05:36.728892   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:05:36.728957   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:05:36.739389   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:05:36.739454   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:05:36.753917   11792 logs.go:276] 0 containers: []
	W0701 05:05:36.753930   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:05:36.753986   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:05:36.764467   11792 logs.go:276] 0 containers: []
	W0701 05:05:36.764480   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:05:36.764488   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:05:36.764494   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:05:36.788315   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:05:36.788322   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:05:36.801677   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:05:36.801690   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:05:36.819686   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:05:36.819697   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:05:36.836970   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:05:36.836980   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:05:36.848224   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:05:36.848235   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:05:36.859854   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:05:36.859866   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:05:36.874577   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:05:36.874602   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:05:36.888068   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:05:36.888077   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:05:36.899648   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:05:36.899659   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:05:36.911489   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:05:36.911500   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:05:36.949956   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:05:36.949966   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:05:36.955081   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:05:36.955088   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:05:36.966490   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:05:36.966504   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:05:37.012713   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:05:37.012724   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:05:39.538670   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:05:44.540313   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0701 05:05:44.540421   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:05:44.551968   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:05:44.552040   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:05:44.563249   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:05:44.563326   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:05:44.574271   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:05:44.574341   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:05:44.585421   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:05:44.585490   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:05:44.596928   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:05:44.597000   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:05:44.608255   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:05:44.608321   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:05:44.619475   11792 logs.go:276] 0 containers: []
	W0701 05:05:44.619489   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:05:44.619549   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:05:44.634744   11792 logs.go:276] 0 containers: []
	W0701 05:05:44.634758   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:05:44.634766   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:05:44.634772   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:05:44.674502   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:05:44.674510   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:05:44.688869   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:05:44.688879   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:05:44.700574   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:05:44.700585   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:05:44.714814   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:05:44.714824   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:05:44.726582   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:05:44.726593   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:05:44.746776   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:05:44.746787   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:05:44.771819   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:05:44.771827   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:05:44.783304   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:05:44.783315   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:05:44.787741   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:05:44.787747   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:05:44.823562   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:05:44.823573   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:05:44.838512   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:05:44.838522   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:05:44.852248   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:05:44.852259   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:05:44.864792   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:05:44.864802   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:05:44.888829   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:05:44.888844   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:05:47.411953   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:05:52.414705   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:05:52.414896   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:05:52.426545   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:05:52.426620   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:05:52.442691   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:05:52.442761   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:05:52.456199   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:05:52.456279   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:05:52.466731   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:05:52.466797   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:05:52.477385   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:05:52.477445   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:05:52.488089   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:05:52.488159   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:05:52.498799   11792 logs.go:276] 0 containers: []
	W0701 05:05:52.498812   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:05:52.498868   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:05:52.510063   11792 logs.go:276] 0 containers: []
	W0701 05:05:52.510076   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:05:52.510083   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:05:52.510088   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:05:52.523605   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:05:52.523615   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:05:52.542078   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:05:52.542088   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:05:52.583207   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:05:52.583218   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:05:52.608559   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:05:52.608568   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:05:52.620745   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:05:52.620756   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:05:52.634991   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:05:52.635000   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:05:52.647011   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:05:52.647021   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:05:52.651273   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:05:52.651282   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:05:52.665784   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:05:52.665793   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:05:52.680032   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:05:52.680040   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:05:52.691996   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:05:52.692006   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:05:52.703890   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:05:52.703904   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:05:52.728423   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:05:52.728433   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:05:52.765947   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:05:52.765959   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
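
Each "Gathering logs for X" entry above then runs `docker logs --tail 400 <id>` for the discovered container, via bash on the guest. A compact sketch of that step in Go; in minikube these commands run over SSH inside the VM (ssh_runner.go), whereas this illustrative helper runs them locally:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainerLogs mirrors the `docker logs --tail 400 <id>` commands
    // in the log. CombinedOutput is used because docker logs may write the
    // container's stderr stream to stderr.
    func tailContainerLogs(id string, lines int) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("docker logs --tail %d %s", lines, id)).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Container ID copied from the log, purely as an example argument.
    	logs, err := tailContainerLogs("70f71c17f4ab", 400)
    	if err != nil {
    		fmt.Println("gather failed:", err)
    		return
    	}
    	fmt.Print(logs)
    }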
	I0701 05:05:55.288600   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:00.288952   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
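
The two lines above are the health probe itself: a GET against https://10.0.2.15:8443/healthz that gives up after roughly five seconds (05:05:55 to 05:06:00) with "Client.Timeout exceeded while awaiting headers", meaning the apiserver never answered at all. A self-contained Go sketch of such a probe; the 5-second budget matches the timestamps, while skipping TLS verification is a shortcut for brevity only (a real probe would trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// 5s budget per probe, matching the gap between "Checking ..." and
    	// "stopped: ... Client.Timeout exceeded" in the log.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Illustrative shortcut; production code would load the
    			// cluster CA instead of skipping verification.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // e.g. context deadline exceeded
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz: %s %s\n", resp.Status, body)
    }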
	I0701 05:06:00.289203   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:00.318255   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:00.318356   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:00.333433   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:00.333512   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:00.345540   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:00.345614   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:00.356676   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:00.356752   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:00.366884   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:00.366947   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:00.379532   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:00.379606   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:00.393399   11792 logs.go:276] 0 containers: []
	W0701 05:06:00.393415   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:00.393476   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:00.403395   11792 logs.go:276] 0 containers: []
	W0701 05:06:00.403407   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:06:00.403414   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:00.403419   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:00.427122   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:00.427131   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:00.431125   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:00.431133   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:00.456964   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:00.456974   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:00.470790   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:00.470799   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:00.482937   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:00.482948   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:00.494873   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:00.494884   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:06:00.512389   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:00.512399   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:00.523696   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:00.523707   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:00.558182   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:00.558192   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:00.571979   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:00.571988   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:00.584408   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:00.584417   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:00.598301   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:00.598309   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:00.634670   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:00.634679   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:00.652266   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:00.652278   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:03.165569   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:08.167623   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
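
By this point the pattern of the failure is visible: the probe times out after 5s, the collector spends about 3s re-gathering container and journal logs, and the cycle repeats on a roughly 8-second cadence until some overall deadline expires. A toy Go loop capturing that shape; the cadence is read off the timestamps, and the actual retry policy inside minikube may differ:

    package main

    import (
    	"fmt"
    	"time"
    )

    // pollUntil keeps probing until probe succeeds or the overall budget
    // runs out, mirroring the repeating check/gather cycle in the log.
    func pollUntil(budget time.Duration, probe func() error, onFail func()) error {
    	deadline := time.Now().Add(budget)
    	for time.Now().Before(deadline) {
    		if err := probe(); err == nil {
    			return nil
    		}
    		onFail() // gather container/journal logs for the report
    	}
    	return fmt.Errorf("apiserver never became healthy within %s", budget)
    }

    func main() {
    	err := pollUntil(1*time.Minute,
    		func() error { return fmt.Errorf("context deadline exceeded") },
    		func() { time.Sleep(3 * time.Second) })
    	fmt.Println(err)
    }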
	I0701 05:06:08.167738   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:08.183521   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:08.183610   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:08.195796   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:08.195872   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:08.207590   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:08.207672   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:08.224166   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:08.224244   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:08.237371   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:08.237454   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:08.249863   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:08.249941   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:08.263564   11792 logs.go:276] 0 containers: []
	W0701 05:06:08.263578   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:08.263641   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:08.277074   11792 logs.go:276] 0 containers: []
	W0701 05:06:08.277088   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:06:08.277099   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:08.277105   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:08.298602   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:08.298614   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:08.318539   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:08.318552   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:08.360258   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:08.360284   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:08.365840   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:08.365852   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:08.391666   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:08.391680   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:08.405784   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:08.405797   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:08.444064   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:08.444075   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:08.468714   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:08.468724   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:08.481155   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:08.481166   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:08.506947   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:08.506954   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:08.520548   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:08.520559   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:08.534804   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:08.534815   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:08.546074   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:08.546087   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:08.558256   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:08.558271   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
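
Alongside per-container logs, each cycle also collects host-level sources: the systemd journals for the kubelet and docker/cri-docker units, a dmesg filtered to warnings and above, and `kubectl describe nodes` run with the guest's pinned binary and kubeconfig. A sketch of the journal/dmesg portion, with the shell commands taken verbatim from the log; running them locally rather than over SSH is the simplification here:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // systemLogs collects the host-level sources the report interleaves
    // with container logs.
    func systemLogs() map[string]string {
    	cmds := map[string]string{
    		"kubelet": "sudo journalctl -u kubelet -n 400",
    		"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
    		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	}
    	out := make(map[string]string)
    	for name, cmd := range cmds {
    		b, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		out[name] = string(b)
    	}
    	return out
    }

    func main() {
    	for name, logs := range systemLogs() {
    		fmt.Printf("== %s ==\n%s\n", name, logs)
    	}
    }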
	I0701 05:06:11.081203   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:16.082327   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:06:16.082561   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:16.105316   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:16.105417   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:16.122195   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:16.122269   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:16.134453   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:16.134524   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:16.145084   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:16.145147   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:16.155771   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:16.155841   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:16.166376   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:16.166444   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:16.177407   11792 logs.go:276] 0 containers: []
	W0701 05:06:16.177416   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:16.177471   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:16.187615   11792 logs.go:276] 0 containers: []
	W0701 05:06:16.187627   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:06:16.187636   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:16.187641   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:16.212889   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:16.212896   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:16.224864   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:16.224874   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:16.238869   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:16.238880   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:16.252896   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:16.252912   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:06:16.272989   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:16.273006   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:16.284827   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:16.284839   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:16.323437   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:16.323447   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:16.327605   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:16.327612   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:16.342260   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:16.342272   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:16.353683   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:16.353694   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:16.365122   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:16.365132   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:16.399443   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:16.399457   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:16.422552   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:16.422566   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:16.436933   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:16.436947   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:18.957879   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:23.960075   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:06:23.960172   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:23.971897   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:23.971967   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:23.983686   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:23.983758   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:23.999650   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:23.999719   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:24.010307   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:24.010381   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:24.021163   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:24.021235   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:24.032332   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:24.032398   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:24.043033   11792 logs.go:276] 0 containers: []
	W0701 05:06:24.043045   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:24.043102   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:24.053546   11792 logs.go:276] 0 containers: []
	W0701 05:06:24.053559   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:06:24.053568   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:24.053573   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:24.067167   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:24.067177   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:24.078239   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:24.078250   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:24.090102   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:24.090113   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:24.101473   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:24.101482   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:24.113080   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:24.113091   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:24.127623   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:24.127635   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:24.141536   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:24.141552   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:06:24.158605   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:24.158620   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:24.170116   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:24.170130   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:24.195141   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:24.195149   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:24.208796   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:24.208807   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:24.235090   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:24.235099   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:24.274512   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:24.274521   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:24.278895   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:24.278903   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:26.823873   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:31.826680   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:06:31.827128   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:31.864197   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:31.864334   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:31.886203   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:31.886328   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:31.901677   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:31.901751   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:31.915509   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:31.915582   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:31.926962   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:31.927030   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:31.938074   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:31.938146   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:31.948113   11792 logs.go:276] 0 containers: []
	W0701 05:06:31.948123   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:31.948178   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:31.958380   11792 logs.go:276] 0 containers: []
	W0701 05:06:31.958389   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:06:31.958398   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:31.958403   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:31.997395   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:31.997406   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:32.012854   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:32.012866   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:32.024163   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:32.024175   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:32.059314   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:32.059326   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:32.070986   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:32.070998   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:32.085381   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:32.085392   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:32.099513   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:32.099525   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:32.123758   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:32.123770   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:32.144001   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:32.144009   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:32.156810   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:32.156822   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:32.169005   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:32.169018   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:32.173397   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:32.173405   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:32.186853   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:32.186867   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:06:32.204306   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:32.204315   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:34.731559   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:39.734152   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:06:39.734380   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:39.751305   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:39.751392   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:39.764682   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:39.764759   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:39.775964   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:39.776037   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:39.786669   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:39.786740   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:39.796751   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:39.796813   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:39.807457   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:39.807528   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:39.817850   11792 logs.go:276] 0 containers: []
	W0701 05:06:39.817860   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:39.817916   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:39.831244   11792 logs.go:276] 0 containers: []
	W0701 05:06:39.831254   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:06:39.831262   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:39.831267   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:39.835753   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:39.835762   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:06:39.853166   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:39.853181   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:39.877353   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:39.877363   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:39.914928   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:39.914939   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:39.926179   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:39.926189   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:39.938304   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:39.938314   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:39.956726   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:39.956736   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:39.972277   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:39.972287   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:39.989788   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:39.989802   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:40.001308   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:40.001321   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:40.012256   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:40.012268   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:40.047309   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:40.047322   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:40.061551   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:40.061560   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:40.072820   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:40.072834   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:42.598594   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:47.600877   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:06:47.600986   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:47.613755   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:47.613833   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:47.626180   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:47.626252   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:47.638038   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:47.638102   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:47.654641   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:47.654722   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:47.666739   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:47.666814   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:47.679036   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:47.679103   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:47.694811   11792 logs.go:276] 0 containers: []
	W0701 05:06:47.694826   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:47.694889   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:47.706383   11792 logs.go:276] 0 containers: []
	W0701 05:06:47.706394   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:06:47.706403   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:47.706410   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:06:47.726764   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:47.726783   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:47.740424   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:47.740437   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:47.765135   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:47.765153   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:47.780264   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:47.780281   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:47.798880   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:47.798897   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:47.824496   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:47.824513   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:47.841430   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:47.841447   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:47.856043   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:47.856058   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:47.900561   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:47.900573   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:47.905260   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:47.905267   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:47.920603   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:47.920616   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:47.932775   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:47.932785   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:47.947824   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:47.947839   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:47.960208   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:47.960218   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:50.501883   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:55.504571   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:06:55.505029   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:55.543385   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:55.543526   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:55.567391   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:55.567512   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:55.581693   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:55.581770   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:55.593710   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:55.593782   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:55.604889   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:55.604962   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:55.615217   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:55.615285   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:55.625603   11792 logs.go:276] 0 containers: []
	W0701 05:06:55.625612   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:55.625665   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:55.635946   11792 logs.go:276] 0 containers: []
	W0701 05:06:55.635959   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:06:55.635966   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:55.635970   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:55.659702   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:55.659713   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:55.676946   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:55.676960   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:55.689045   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:55.689058   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:55.727282   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:55.727290   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:55.731881   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:55.731887   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:55.746107   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:55.746120   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:55.757516   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:55.757527   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:06:55.774598   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:55.774608   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:55.786341   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:55.786356   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:55.824926   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:55.824936   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:55.838535   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:55.838546   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:55.854331   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:55.854341   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:55.868132   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:55.868144   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:55.879603   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:55.879613   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:58.404546   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:03.404969   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:03.405034   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:07:03.417249   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:07:03.417337   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:07:03.428846   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:07:03.428897   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:07:03.449470   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:07:03.449532   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:07:03.466435   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:07:03.466495   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:07:03.483916   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:07:03.483979   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:07:03.495355   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:07:03.495422   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:07:03.507055   11792 logs.go:276] 0 containers: []
	W0701 05:07:03.507068   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:07:03.507100   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:07:03.518693   11792 logs.go:276] 0 containers: []
	W0701 05:07:03.518706   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:07:03.518714   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:07:03.518719   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:07:03.533553   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:07:03.533564   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:07:03.546188   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:07:03.546202   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:07:03.568191   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:07:03.568204   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:07:03.581822   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:07:03.581839   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:07:03.622178   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:07:03.622193   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:07:03.626953   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:07:03.626963   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:07:03.641090   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:07:03.641102   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:07:03.677197   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:07:03.677208   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:07:03.701282   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:07:03.701296   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:07:03.718220   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:07:03.718234   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:07:03.733844   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:07:03.733855   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:07:03.749311   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:07:03.749322   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:07:03.767225   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:07:03.767239   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:07:03.781759   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:07:03.781771   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:07:06.309646   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:11.311810   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:11.311968   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:07:11.322759   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:07:11.322825   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:07:11.332904   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:07:11.332978   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:07:11.347302   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:07:11.347364   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:07:11.358406   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:07:11.358481   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:07:11.368631   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:07:11.368697   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:07:11.378835   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:07:11.378901   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:07:11.388940   11792 logs.go:276] 0 containers: []
	W0701 05:07:11.388952   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:07:11.389011   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:07:11.399393   11792 logs.go:276] 0 containers: []
	W0701 05:07:11.399405   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:07:11.399412   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:07:11.399418   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:07:11.410573   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:07:11.410584   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:07:11.445859   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:07:11.445869   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:07:11.463815   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:07:11.463825   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:07:11.475990   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:07:11.476001   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:07:11.488338   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:07:11.488347   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:07:11.506317   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:07:11.506327   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:07:11.518201   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:07:11.518212   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:07:11.523169   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:07:11.523179   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:07:11.547299   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:07:11.547309   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:07:11.560764   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:07:11.560774   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:07:11.575114   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:07:11.575124   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:07:11.586655   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:07:11.586665   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:07:11.625373   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:07:11.625382   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:07:11.640366   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:07:11.640376   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:07:14.165592   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:19.167949   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:19.168169   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:07:19.189690   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:07:19.189795   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:07:19.204920   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:07:19.205002   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:07:19.218075   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:07:19.218178   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:07:19.229214   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:07:19.229281   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:07:19.241157   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:07:19.241228   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:07:19.251762   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:07:19.251821   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:07:19.262368   11792 logs.go:276] 0 containers: []
	W0701 05:07:19.262382   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:07:19.262449   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:07:19.275321   11792 logs.go:276] 0 containers: []
	W0701 05:07:19.275332   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:07:19.275341   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:07:19.275346   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:07:19.293356   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:07:19.293368   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:07:19.305311   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:07:19.305325   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:07:19.318884   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:07:19.318898   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:07:19.330243   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:07:19.330258   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:07:19.354839   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:07:19.354851   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:07:19.369055   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:07:19.369069   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:07:19.380925   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:07:19.380938   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:07:19.395696   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:07:19.395706   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:07:19.407147   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:07:19.407158   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:07:19.431231   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:07:19.431238   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:07:19.470236   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:07:19.470244   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:07:19.474766   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:07:19.474771   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:07:19.516014   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:07:19.516025   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:07:19.530676   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:07:19.530685   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:07:22.046257   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:27.046870   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:27.047118   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:07:27.075392   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:07:27.075513   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:07:27.093309   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:07:27.093389   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:07:27.106557   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:07:27.106630   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:07:27.119560   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:07:27.119635   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:07:27.130073   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:07:27.130145   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:07:27.141362   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:07:27.141434   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:07:27.151810   11792 logs.go:276] 0 containers: []
	W0701 05:07:27.151822   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:07:27.151878   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:07:27.162525   11792 logs.go:276] 0 containers: []
	W0701 05:07:27.162537   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:07:27.162546   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:07:27.162552   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:07:27.175790   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:07:27.175800   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:07:27.193368   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:07:27.193380   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:07:27.205457   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:07:27.205469   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:07:27.209702   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:07:27.209714   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:07:27.233870   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:07:27.233881   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:07:27.245112   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:07:27.245121   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:07:27.267998   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:07:27.268005   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:07:27.305554   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:07:27.305565   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:07:27.330666   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:07:27.330674   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:07:27.344878   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:07:27.344887   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:07:27.356640   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:07:27.356649   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:07:27.371145   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:07:27.371154   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:07:27.382705   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:07:27.382715   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:07:27.419756   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:07:27.419764   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:07:29.935869   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:34.938083   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:34.938245   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:07:34.950256   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:07:34.950338   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:07:34.962481   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:07:34.962548   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:07:34.973265   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:07:34.973334   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:07:34.984333   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:07:34.984407   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:07:34.994549   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:07:34.994625   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:07:35.005276   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:07:35.005348   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:07:35.019404   11792 logs.go:276] 0 containers: []
	W0701 05:07:35.019416   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:07:35.019477   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:07:35.030069   11792 logs.go:276] 0 containers: []
	W0701 05:07:35.030081   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:07:35.030089   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:07:35.030093   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:07:35.041620   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:07:35.041630   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:07:35.065514   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:07:35.065525   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:07:35.081346   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:07:35.081361   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:07:35.093196   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:07:35.093206   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:07:35.113936   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:07:35.113947   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:07:35.125523   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:07:35.125535   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:07:35.138900   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:07:35.138917   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:07:35.143414   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:07:35.143426   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:07:35.176652   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:07:35.176663   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:07:35.190725   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:07:35.190736   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:07:35.202096   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:07:35.202107   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:07:35.226669   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:07:35.226677   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:07:35.238274   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:07:35.238285   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:07:35.277685   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:07:35.277698   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:07:37.793240   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:42.794908   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:42.795086   11792 kubeadm.go:591] duration metric: took 4m4.576529083s to restartPrimaryControlPlane
	W0701 05:07:42.795222   11792 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0701 05:07:42.795281   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0701 05:07:43.781266   11792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 05:07:43.786118   11792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 05:07:43.788893   11792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 05:07:43.791429   11792 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 05:07:43.791435   11792 kubeadm.go:156] found existing configuration files:
	
	I0701 05:07:43.791456   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/admin.conf
	I0701 05:07:43.793772   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0701 05:07:43.793792   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0701 05:07:43.796834   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/kubelet.conf
	I0701 05:07:43.799269   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0701 05:07:43.799288   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0701 05:07:43.801985   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/controller-manager.conf
	I0701 05:07:43.804794   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0701 05:07:43.804815   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 05:07:43.807723   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/scheduler.conf
	I0701 05:07:43.810299   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0701 05:07:43.810325   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 05:07:43.813277   11792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0701 05:07:43.831531   11792 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0701 05:07:43.831579   11792 kubeadm.go:309] [preflight] Running pre-flight checks
	I0701 05:07:43.879261   11792 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0701 05:07:43.879312   11792 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0701 05:07:43.879352   11792 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0701 05:07:43.928682   11792 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0701 05:07:43.936867   11792 out.go:204]   - Generating certificates and keys ...
	I0701 05:07:43.936901   11792 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0701 05:07:43.936932   11792 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0701 05:07:43.936978   11792 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0701 05:07:43.937032   11792 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0701 05:07:43.937066   11792 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0701 05:07:43.937096   11792 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0701 05:07:43.937135   11792 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0701 05:07:43.937167   11792 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0701 05:07:43.937201   11792 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0701 05:07:43.937238   11792 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0701 05:07:43.937258   11792 kubeadm.go:309] [certs] Using the existing "sa" key
	I0701 05:07:43.937284   11792 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0701 05:07:44.022949   11792 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0701 05:07:44.122248   11792 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0701 05:07:44.252992   11792 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0701 05:07:44.291491   11792 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0701 05:07:44.321619   11792 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0701 05:07:44.321971   11792 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0701 05:07:44.322026   11792 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0701 05:07:44.413970   11792 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0701 05:07:44.418269   11792 out.go:204]   - Booting up control plane ...
	I0701 05:07:44.418314   11792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0701 05:07:44.418351   11792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0701 05:07:44.418386   11792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0701 05:07:44.418438   11792 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0701 05:07:44.418540   11792 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0701 05:07:48.918533   11792 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501637 seconds
	I0701 05:07:48.918595   11792 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0701 05:07:48.921984   11792 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0701 05:07:49.441682   11792 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0701 05:07:49.441914   11792 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-803000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0701 05:07:49.946207   11792 kubeadm.go:309] [bootstrap-token] Using token: 6zv076.ks0is4rdrwcaqafy
	I0701 05:07:49.952496   11792 out.go:204]   - Configuring RBAC rules ...
	I0701 05:07:49.952554   11792 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0701 05:07:49.952604   11792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0701 05:07:49.955293   11792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0701 05:07:49.960127   11792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0701 05:07:49.960994   11792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0701 05:07:49.961944   11792 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0701 05:07:49.964989   11792 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0701 05:07:50.135044   11792 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0701 05:07:50.351013   11792 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0701 05:07:50.351962   11792 kubeadm.go:309] 
	I0701 05:07:50.351998   11792 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0701 05:07:50.352002   11792 kubeadm.go:309] 
	I0701 05:07:50.352039   11792 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0701 05:07:50.352043   11792 kubeadm.go:309] 
	I0701 05:07:50.352063   11792 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0701 05:07:50.352109   11792 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0701 05:07:50.352141   11792 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0701 05:07:50.352146   11792 kubeadm.go:309] 
	I0701 05:07:50.352181   11792 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0701 05:07:50.352195   11792 kubeadm.go:309] 
	I0701 05:07:50.352224   11792 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0701 05:07:50.352227   11792 kubeadm.go:309] 
	I0701 05:07:50.352264   11792 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0701 05:07:50.352316   11792 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0701 05:07:50.352355   11792 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0701 05:07:50.352360   11792 kubeadm.go:309] 
	I0701 05:07:50.352413   11792 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0701 05:07:50.352458   11792 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0701 05:07:50.352460   11792 kubeadm.go:309] 
	I0701 05:07:50.352502   11792 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6zv076.ks0is4rdrwcaqafy \
	I0701 05:07:50.352568   11792 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7272984af11ea72dfc66ced44d2e729700629bab3eeb62bb340890f2c50dfe86 \
	I0701 05:07:50.352582   11792 kubeadm.go:309] 	--control-plane 
	I0701 05:07:50.352585   11792 kubeadm.go:309] 
	I0701 05:07:50.352648   11792 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0701 05:07:50.352652   11792 kubeadm.go:309] 
	I0701 05:07:50.352701   11792 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6zv076.ks0is4rdrwcaqafy \
	I0701 05:07:50.352772   11792 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7272984af11ea72dfc66ced44d2e729700629bab3eeb62bb340890f2c50dfe86 
	I0701 05:07:50.352849   11792 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0701 05:07:50.352865   11792 cni.go:84] Creating CNI manager for ""
	I0701 05:07:50.352873   11792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:07:50.356764   11792 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0701 05:07:50.366763   11792 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0701 05:07:50.369732   11792 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0701 05:07:50.374635   11792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 05:07:50.374673   11792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 05:07:50.374701   11792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-803000 minikube.k8s.io/updated_at=2024_07_01T05_07_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c minikube.k8s.io/name=running-upgrade-803000 minikube.k8s.io/primary=true
	I0701 05:07:50.417983   11792 kubeadm.go:1107] duration metric: took 43.3425ms to wait for elevateKubeSystemPrivileges
	I0701 05:07:50.417988   11792 ops.go:34] apiserver oom_adj: -16
	W0701 05:07:50.418006   11792 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0701 05:07:50.418009   11792 kubeadm.go:393] duration metric: took 4m12.218980959s to StartCluster
	I0701 05:07:50.418018   11792 settings.go:142] acquiring lock: {Name:mk8a5112b51a742a29c931ccf59ae86bde00a5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:50.418186   11792 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:07:50.418552   11792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/kubeconfig: {Name:mk4c90f67cab929310b408048010034fbad6f093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:50.418770   11792 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:07:50.418814   11792 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 05:07:50.418845   11792 config.go:182] Loaded profile config "running-upgrade-803000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:07:50.418847   11792 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-803000"
	I0701 05:07:50.418860   11792 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-803000"
	W0701 05:07:50.418864   11792 addons.go:243] addon storage-provisioner should already be in state true
	I0701 05:07:50.418868   11792 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-803000"
	I0701 05:07:50.418875   11792 host.go:66] Checking if "running-upgrade-803000" exists ...
	I0701 05:07:50.418883   11792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-803000"
	I0701 05:07:50.419752   11792 kapi.go:59] client config for running-upgrade-803000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/client.key", CAFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1057f9090), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 05:07:50.419882   11792 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-803000"
	W0701 05:07:50.419886   11792 addons.go:243] addon default-storageclass should already be in state true
	I0701 05:07:50.419898   11792 host.go:66] Checking if "running-upgrade-803000" exists ...
	I0701 05:07:50.421791   11792 out.go:177] * Verifying Kubernetes components...
	I0701 05:07:50.422079   11792 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 05:07:50.426230   11792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 05:07:50.426239   11792 sshutil.go:53] new ssh client: &{IP:localhost Port:52135 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/running-upgrade-803000/id_rsa Username:docker}
	I0701 05:07:50.429826   11792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:50.433731   11792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:07:50.437839   11792 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 05:07:50.437844   11792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 05:07:50.437849   11792 sshutil.go:53] new ssh client: &{IP:localhost Port:52135 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/running-upgrade-803000/id_rsa Username:docker}
	I0701 05:07:50.523021   11792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 05:07:50.528243   11792 api_server.go:52] waiting for apiserver process to appear ...
	I0701 05:07:50.528282   11792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:07:50.531977   11792 api_server.go:72] duration metric: took 113.197292ms to wait for apiserver process to appear ...
	I0701 05:07:50.531986   11792 api_server.go:88] waiting for apiserver healthz status ...
	I0701 05:07:50.531992   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:50.556902   11792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 05:07:50.575147   11792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 05:07:55.534195   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:55.534275   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:00.535016   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:00.535068   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:05.535724   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:05.535794   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:10.536855   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:10.536915   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:15.538418   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:15.538466   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:20.540372   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:20.540401   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0701 05:08:20.938530   11792 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0701 05:08:20.942798   11792 out.go:177] * Enabled addons: storage-provisioner
	I0701 05:08:20.954753   11792 addons.go:510] duration metric: took 30.536080041s for enable addons: enabled=[storage-provisioner]
	I0701 05:08:25.542323   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:25.542406   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:30.544980   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:30.545011   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:35.547328   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:35.547410   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:40.550021   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:40.550044   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:45.550928   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:45.550977   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:50.553268   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:50.553363   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:50.563887   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:08:50.563955   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:50.574175   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:08:50.574246   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:50.584576   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:08:50.584642   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:50.595156   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:08:50.595232   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:50.605276   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:08:50.605342   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:50.615657   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:08:50.615750   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:50.625957   11792 logs.go:276] 0 containers: []
	W0701 05:08:50.625968   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:50.626023   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:50.638124   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:08:50.638140   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:08:50.638146   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:08:50.650202   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:08:50.650212   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:08:50.667945   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:08:50.667955   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:08:50.679839   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:08:50.679855   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:50.691273   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:50.691284   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:50.727545   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:50.727553   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:50.732226   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:08:50.732233   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:08:50.746406   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:08:50.746416   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:08:50.758181   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:08:50.758191   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:08:50.772680   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:50.772697   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:50.796070   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:50.796079   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:50.834712   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:08:50.834725   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:08:50.848464   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:08:50.848474   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:08:53.361858   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:58.364203   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:58.364352   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:58.380364   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:08:58.380466   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:58.393334   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:08:58.393407   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:58.404433   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:08:58.404493   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:58.414938   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:08:58.415006   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:58.425491   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:08:58.425554   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:58.436003   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:08:58.436060   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:58.446303   11792 logs.go:276] 0 containers: []
	W0701 05:08:58.446315   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:58.446366   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:58.456576   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:08:58.456592   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:58.456598   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:58.461007   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:08:58.461016   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:08:58.474752   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:08:58.474763   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:08:58.488286   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:08:58.488295   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:08:58.499334   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:08:58.499343   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:08:58.513866   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:58.513878   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:58.537111   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:58.537119   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:58.570648   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:08:58.570657   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:08:58.581750   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:08:58.581761   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:08:58.594723   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:08:58.594737   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:08:58.612464   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:08:58.612475   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:08:58.623423   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:08:58.623433   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:58.634745   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:58.634756   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:01.172267   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:06.174052   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:06.174247   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:06.189775   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:06.189860   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:06.202889   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:06.202969   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:06.215909   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:06.215991   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:06.226410   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:06.226482   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:06.236990   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:06.237056   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:06.247180   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:06.247238   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:06.257462   11792 logs.go:276] 0 containers: []
	W0701 05:09:06.257475   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:06.257525   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:06.268022   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:06.268041   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:06.268046   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:06.304024   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:06.304040   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:06.308905   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:06.308910   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:06.326019   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:06.326030   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:06.337350   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:06.337360   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:06.360100   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:06.360108   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:06.371692   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:06.371702   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:06.383806   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:06.383816   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:06.421203   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:06.421214   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:06.435156   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:06.435169   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:06.448824   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:06.448835   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:06.460002   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:06.460014   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:06.472114   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:06.472126   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:08.989068   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:13.991553   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:13.991806   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:14.015629   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:14.015732   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:14.032348   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:14.032422   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:14.046292   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:14.046378   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:14.058483   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:14.058550   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:14.068999   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:14.069060   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:14.079119   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:14.079176   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:14.089240   11792 logs.go:276] 0 containers: []
	W0701 05:09:14.089251   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:14.089305   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:14.099779   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:14.099795   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:14.099805   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:14.111364   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:14.111375   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:14.125623   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:14.125633   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:14.137361   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:14.137375   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:14.158175   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:14.158187   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:14.169638   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:14.169650   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:14.193975   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:14.193987   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:14.234067   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:14.234079   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:14.239162   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:14.239169   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:14.253781   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:14.253790   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:14.267465   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:14.267477   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:14.279377   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:14.279388   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:14.290742   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:14.290754   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:16.826539   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:21.828835   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:21.829011   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:21.849147   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:21.849238   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:21.862095   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:21.862159   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:21.873256   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:21.873325   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:21.883744   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:21.883811   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:21.893995   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:21.894064   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:21.906000   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:21.906068   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:21.924132   11792 logs.go:276] 0 containers: []
	W0701 05:09:21.924144   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:21.924203   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:21.937831   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:21.937851   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:21.937857   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:21.972829   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:21.972845   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:22.010526   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:22.010536   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:22.025765   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:22.025775   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:22.039828   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:22.039839   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:22.051676   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:22.051688   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:22.066407   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:22.066418   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:22.078009   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:22.078018   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:22.082702   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:22.082710   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:22.094356   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:22.094369   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:22.105979   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:22.105992   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:22.122991   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:22.123003   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:22.146129   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:22.146137   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
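The gathering phase between probes always runs the same two docker commands: docker ps -a --filter=name=k8s_<component> --format={{.ID}} to find each component's containers, then docker logs --tail 400 <id> on every match (host-side logs come from journalctl and dmesg instead). A minimal Go sketch of that enumeration follows, assuming only the docker CLI flags and the component list visible in the log; the function names are illustrative and this is not minikube's logs.go.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the enumeration step in the log: list all
    // containers (running or exited) whose name matches the kubeadm
    // naming convention k8s_<component>, printing only their IDs.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors the gathering step: fetch the last 400 log lines
    // of one container, as in `docker logs --tail 400 <id>`.
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil || len(ids) == 0 {
                // Matches the warning in the log: No container was found
                // matching "kindnet".
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := tailLogs(id)
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }

Note that the coredns match count grows from 2 containers to 4 later in this log (05:10:08 onward), so the enumeration has to handle multiple IDs per component, as the loop above does.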
	I0701 05:09:24.659547   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:29.661944   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:29.662122   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:29.680677   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:29.680773   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:29.694859   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:29.694936   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:29.706738   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:29.706806   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:29.717319   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:29.717384   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:29.728028   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:29.728096   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:29.738846   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:29.738913   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:29.750109   11792 logs.go:276] 0 containers: []
	W0701 05:09:29.750121   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:29.750175   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:29.760884   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:29.760903   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:29.760908   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:29.773394   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:29.773406   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:29.807991   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:29.808003   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:29.812433   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:29.812442   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:29.847679   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:29.847693   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:29.862026   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:29.862039   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:29.875702   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:29.875713   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:29.887954   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:29.887968   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:29.899390   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:29.899402   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:29.914939   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:29.914952   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:29.929635   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:29.929646   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:29.949242   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:29.949254   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:29.967211   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:29.967221   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:32.494003   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:37.496739   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:37.497138   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:37.535196   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:37.535325   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:37.559453   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:37.559545   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:37.574625   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:37.574700   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:37.586725   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:37.586792   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:37.597834   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:37.597917   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:37.609809   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:37.609881   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:37.620737   11792 logs.go:276] 0 containers: []
	W0701 05:09:37.620748   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:37.620807   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:37.632062   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:37.632078   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:37.632083   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:37.647012   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:37.647026   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:37.659356   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:37.659368   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:37.675039   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:37.675049   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:37.687999   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:37.688009   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:37.705903   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:37.705913   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:37.731245   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:37.731253   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:37.735646   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:37.735653   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:37.750950   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:37.750960   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:37.763372   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:37.763382   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:37.775957   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:37.775969   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:37.787978   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:37.787992   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:37.823329   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:37.823341   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:40.362177   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:45.363693   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:45.363885   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:45.380563   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:45.380644   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:45.393358   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:45.393423   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:45.404584   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:45.404655   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:45.415540   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:45.415609   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:45.426508   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:45.426579   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:45.437750   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:45.437814   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:45.448477   11792 logs.go:276] 0 containers: []
	W0701 05:09:45.448491   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:45.448551   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:45.459734   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:45.459751   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:45.459756   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:45.471785   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:45.471795   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:45.496458   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:45.496465   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:45.508847   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:45.508859   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:45.558964   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:45.558976   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:45.571756   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:45.571769   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:45.583802   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:45.583813   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:45.599076   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:45.599089   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:45.617137   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:45.617147   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:45.654063   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:45.654081   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:45.658910   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:45.658920   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:45.673850   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:45.673862   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:45.688006   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:45.688016   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:48.201183   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:53.201579   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:53.201752   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:53.224887   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:53.224969   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:53.236740   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:53.236809   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:53.247895   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:53.247969   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:53.259275   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:53.259339   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:53.270252   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:53.270321   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:53.281930   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:53.282001   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:53.292771   11792 logs.go:276] 0 containers: []
	W0701 05:09:53.292780   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:53.292834   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:53.304135   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:53.304149   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:53.304155   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:53.341224   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:53.341234   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:53.353545   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:53.353555   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:53.369591   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:53.369601   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:53.393037   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:53.393046   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:53.398004   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:53.398010   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:53.412749   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:53.412759   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:53.427148   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:53.427158   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:53.439593   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:53.439603   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:53.455143   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:53.455157   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:53.473032   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:53.473042   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:53.485641   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:53.485652   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:53.497435   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:53.497444   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:56.032180   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:01.033546   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:01.033747   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:01.046927   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:01.047002   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:01.058053   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:01.058126   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:01.068455   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:10:01.068522   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:01.080385   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:01.080453   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:01.094978   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:01.095049   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:01.106188   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:01.106255   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:01.117131   11792 logs.go:276] 0 containers: []
	W0701 05:10:01.117143   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:01.117203   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:01.128012   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:01.128028   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:01.128033   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:01.139992   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:01.140001   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:01.177869   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:01.177882   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:01.191791   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:01.191804   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:01.204158   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:01.204168   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:01.215852   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:01.215863   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:01.231544   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:01.231557   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:01.253111   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:01.253120   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:01.286300   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:01.286309   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:01.290461   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:01.290469   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:01.305188   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:01.305198   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:01.317263   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:01.317273   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:01.342239   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:01.342248   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:03.855358   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:08.857577   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:08.857844   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:08.893077   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:08.893173   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:08.911412   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:08.911494   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:08.924412   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:08.924489   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:08.935318   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:08.935385   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:08.946504   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:08.946573   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:08.957535   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:08.957601   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:08.968410   11792 logs.go:276] 0 containers: []
	W0701 05:10:08.968420   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:08.968470   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:08.979026   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:08.979045   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:08.979051   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:08.993124   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:08.993136   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:09.004534   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:09.004546   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:09.038302   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:09.038311   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:09.042753   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:09.042760   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:09.064425   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:09.064434   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:09.079732   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:09.079746   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:09.091502   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:09.091516   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:09.116052   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:09.116059   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:09.151159   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:09.151169   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:09.163451   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:09.163465   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:09.181142   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:09.181151   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:09.192191   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:09.192201   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:09.203343   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:09.203354   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:09.214529   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:09.214543   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:11.727964   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:16.730319   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:16.730470   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:16.745107   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:16.745187   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:16.756388   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:16.756452   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:16.766982   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:16.767053   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:16.783820   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:16.783887   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:16.794657   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:16.794722   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:16.805406   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:16.805470   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:16.816045   11792 logs.go:276] 0 containers: []
	W0701 05:10:16.816059   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:16.816111   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:16.826124   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:16.826141   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:16.826146   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:16.846538   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:16.846560   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:16.873290   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:16.873305   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:16.908460   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:16.908471   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:16.923097   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:16.923106   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:16.934758   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:16.934767   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:16.946357   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:16.946366   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:16.957467   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:16.957477   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:16.971552   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:16.971561   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:16.983173   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:16.983182   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:17.016174   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:17.016184   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:17.051019   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:17.051043   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:17.056794   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:17.056805   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:17.068364   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:17.068375   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:17.080205   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:17.080217   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:19.594446   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:24.596834   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:24.596965   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:24.609757   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:24.609834   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:24.622832   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:24.622902   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:24.633573   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:24.633641   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:24.644379   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:24.644444   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:24.654751   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:24.654822   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:24.674775   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:24.674846   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:24.685595   11792 logs.go:276] 0 containers: []
	W0701 05:10:24.685613   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:24.685665   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:24.696327   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:24.696345   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:24.696353   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:24.708065   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:24.708075   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:24.719965   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:24.719975   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:24.735115   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:24.735127   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:24.746736   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:24.746746   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:24.771306   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:24.771315   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:24.782962   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:24.782975   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:24.818643   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:24.818654   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:24.830205   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:24.830216   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:24.842087   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:24.842096   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:24.856053   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:24.856063   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:24.867862   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:24.867872   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:24.884810   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:24.884819   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:24.902120   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:24.902130   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:24.935414   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:24.935422   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:27.441982   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:32.444348   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:32.444579   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:32.466845   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:32.466946   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:32.482688   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:32.482773   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:32.495849   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:32.495923   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:32.508690   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:32.508748   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:32.519025   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:32.519097   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:32.529505   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:32.529570   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:32.539869   11792 logs.go:276] 0 containers: []
	W0701 05:10:32.539881   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:32.539936   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:32.552257   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:32.552275   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:32.552280   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:32.566703   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:32.566713   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:32.578118   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:32.578128   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:32.589420   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:32.589430   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:32.603801   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:32.603814   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:32.616070   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:32.616080   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:32.620909   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:32.620917   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:32.635320   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:32.635331   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:32.652653   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:32.652663   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:32.665079   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:32.665091   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:32.705393   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:32.705404   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:32.718582   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:32.718591   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:32.730487   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:32.730497   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:32.742082   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:32.742091   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:32.766749   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:32.766758   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:35.302561   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:40.305122   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:40.305340   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:40.323301   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:40.323385   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:40.336504   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:40.336571   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:40.348003   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:40.348076   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:40.358188   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:40.358260   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:40.368740   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:40.368808   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:40.379197   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:40.379263   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:40.388968   11792 logs.go:276] 0 containers: []
	W0701 05:10:40.388985   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:40.389036   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:40.403611   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:40.403627   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:40.403633   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:40.437663   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:40.437677   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:40.455096   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:40.455107   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:40.466888   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:40.466900   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:40.478218   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:40.478229   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:40.503495   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:40.503505   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:40.508162   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:40.508173   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:40.520188   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:40.520201   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:40.531687   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:40.531697   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:40.543595   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:40.543605   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:40.556476   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:40.556489   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:40.568113   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:40.568123   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:40.604056   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:40.604066   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:40.617891   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:40.617900   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:40.633277   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:40.633285   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:43.153493   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:48.155898   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:48.156243   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:48.192340   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:48.192469   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:48.217618   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:48.217701   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:48.244300   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:48.244371   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:48.255525   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:48.255596   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:48.271022   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:48.271089   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:48.282268   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:48.282344   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:48.295893   11792 logs.go:276] 0 containers: []
	W0701 05:10:48.295905   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:48.295966   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:48.307196   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:48.307214   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:48.307219   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:48.343269   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:48.343278   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:48.357975   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:48.357984   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:48.369536   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:48.369549   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:48.383823   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:48.383834   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:48.414314   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:48.414325   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:48.425676   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:48.425686   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:48.438987   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:48.438998   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:48.475692   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:48.475702   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:48.492817   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:48.492827   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:48.514379   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:48.514391   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:48.527833   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:48.527843   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:48.532857   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:48.532865   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:48.552284   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:48.552294   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:48.564504   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:48.564514   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:51.089810   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:56.092370   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:56.092594   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:56.111299   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:56.111400   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:56.125508   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:56.125578   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:56.140652   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:56.140718   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:56.151292   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:56.151354   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:56.161908   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:56.161975   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:56.172772   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:56.172836   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:56.186819   11792 logs.go:276] 0 containers: []
	W0701 05:10:56.186829   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:56.186877   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:56.198088   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:56.198106   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:56.198111   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:56.214605   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:56.214615   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:56.219536   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:56.219545   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:56.234036   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:56.234045   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:56.245640   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:56.245651   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:56.261978   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:56.261987   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:56.286762   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:56.286769   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:56.298471   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:56.298481   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:56.333760   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:56.333774   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:56.345468   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:56.345479   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:56.357631   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:56.357647   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:56.392868   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:56.392875   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:56.405856   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:56.405870   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:56.423500   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:56.423508   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:56.441025   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:56.441036   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:58.954740   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:03.957083   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:03.957265   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:03.974225   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:03.974314   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:03.995487   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:03.995547   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:04.005619   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:04.005695   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:04.019713   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:04.019774   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:04.031096   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:04.031167   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:04.041485   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:04.041545   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:04.051574   11792 logs.go:276] 0 containers: []
	W0701 05:11:04.051587   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:04.051646   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:04.062130   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:04.062151   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:04.062156   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:04.105762   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:04.105773   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:04.118765   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:04.118774   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:04.132880   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:04.132891   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:04.144985   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:04.144996   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:04.160242   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:04.160251   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:04.177826   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:04.177836   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:04.197951   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:04.197960   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:04.223409   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:04.223419   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:04.259099   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:04.259107   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:04.275001   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:04.275011   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:04.280112   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:04.280121   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:04.294105   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:04.294118   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:04.309261   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:04.309275   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:04.321214   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:04.321227   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:06.840620   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:11.842880   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:11.843061   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:11.854995   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:11.855070   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:11.866492   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:11.866565   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:11.877794   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:11.877867   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:11.889049   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:11.889112   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:11.903752   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:11.903822   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:11.915568   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:11.915637   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:11.926534   11792 logs.go:276] 0 containers: []
	W0701 05:11:11.926547   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:11.926609   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:11.937853   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:11.937870   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:11.937875   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:11.975182   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:11.975197   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:11.989925   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:11.989935   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:12.001926   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:12.001937   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:12.014497   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:12.014507   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:12.032614   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:12.032625   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:12.044478   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:12.044490   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:12.057116   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:12.057127   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:12.071927   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:12.071936   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:12.099097   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:12.099107   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:12.111073   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:12.111085   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:12.115903   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:12.115912   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:12.151007   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:12.151017   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:12.163737   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:12.163747   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:12.177999   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:12.178009   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:14.692421   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:19.700949   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:19.701216   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:19.723986   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:19.724110   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:19.739635   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:19.739716   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:19.753026   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:19.753091   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:19.764052   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:19.764113   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:19.774445   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:19.774504   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:19.785138   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:19.785196   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:19.795370   11792 logs.go:276] 0 containers: []
	W0701 05:11:19.795383   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:19.795438   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:19.806270   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:19.806286   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:19.806292   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:19.820591   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:19.820604   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:19.832237   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:19.832248   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:19.844825   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:19.844835   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:19.856244   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:19.856255   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:19.868382   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:19.868392   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:19.882302   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:19.882311   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:19.893860   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:19.893870   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:19.905934   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:19.905945   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:19.920659   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:19.920668   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:19.945816   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:19.945823   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:19.981239   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:19.981248   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:19.986197   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:19.986207   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:20.022066   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:20.022077   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:20.033648   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:20.033658   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:22.557324   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:27.568000   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:27.568087   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:27.578954   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:27.579025   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:27.589447   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:27.589514   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:27.600299   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:27.600373   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:27.611269   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:27.611339   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:27.621936   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:27.622013   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:27.632559   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:27.632628   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:27.644427   11792 logs.go:276] 0 containers: []
	W0701 05:11:27.644438   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:27.644491   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:27.655427   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:27.655445   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:27.655450   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:27.679148   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:27.679156   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:27.693890   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:27.693900   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:27.706419   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:27.706430   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:27.718291   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:27.718301   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:27.730470   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:27.730482   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:27.748970   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:27.748980   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:27.784754   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:27.784767   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:27.821142   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:27.821155   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:27.825885   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:27.825891   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:27.837626   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:27.837638   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:27.849292   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:27.849302   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:27.861505   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:27.861516   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:27.876043   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:27.876053   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:27.890575   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:27.890585   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:30.408014   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:35.415785   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:35.416011   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:35.447268   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:35.447371   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:35.463992   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:35.464071   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:35.479167   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:35.479247   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:35.490720   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:35.490790   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:35.500772   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:35.500844   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:35.514557   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:35.514627   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:35.524716   11792 logs.go:276] 0 containers: []
	W0701 05:11:35.524728   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:35.524785   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:35.535071   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:35.535090   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:35.535095   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:35.546282   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:35.546295   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:35.558360   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:35.558373   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:35.593161   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:35.593174   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:35.607796   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:35.607806   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:35.619566   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:35.619581   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:35.634344   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:35.634355   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:35.646007   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:35.646022   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:35.663660   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:35.663670   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:35.676536   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:35.676549   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:35.680999   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:35.681008   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:35.719402   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:35.719415   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:35.733191   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:35.733201   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:35.745096   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:35.745105   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:35.756883   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:35.756893   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:38.283211   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:43.288831   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:43.288954   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:43.300402   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:43.300479   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:43.312449   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:43.312518   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:43.323576   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:43.323641   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:43.334250   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:43.334315   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:43.344997   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:43.345064   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:43.363245   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:43.363308   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:43.373427   11792 logs.go:276] 0 containers: []
	W0701 05:11:43.373440   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:43.373490   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:43.387510   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:43.387525   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:43.387531   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:43.422696   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:43.422703   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:43.426915   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:43.426920   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:43.440524   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:43.440535   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:43.451705   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:43.451716   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:43.463331   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:43.463342   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:43.478608   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:43.478619   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:43.490146   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:43.490161   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:43.501560   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:43.501569   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:43.513186   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:43.513195   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:43.550158   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:43.550168   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:43.562285   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:43.562299   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:43.577226   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:43.577237   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:43.593372   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:43.593386   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:43.612171   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:43.612180   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
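The cycle above repeats the same diagnostic sweep after every failed healthz probe: a filtered `docker ps -a` per control-plane component to discover container IDs (logs.go:276), then `docker logs --tail 400` on each hit (logs.go:123). A minimal Go sketch of that pattern, for illustration only; the component list and tail depth come from the log lines themselves, and this is not minikube's actual code:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the logged `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			// Tail the last 400 lines, as in the logged `docker logs --tail 400 <id>`.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}
```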
	I0701 05:11:46.139384   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:51.143270   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:51.147670   11792 out.go:177] 
	W0701 05:11:51.151688   11792 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0701 05:11:51.151697   11792 out.go:239] * 
	W0701 05:11:51.152350   11792 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:11:51.166597   11792 out.go:177] 

** /stderr **
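The stderr capture shows the start-up wait in miniature: api_server.go probes https://10.0.2.15:8443/healthz with a 5-second per-attempt timeout (the gap between each `Checking` and `stopped` pair), gathers logs, and retries until the overall 6m0s node deadline expires with GUEST_START. A minimal sketch of such a polling loop, assuming a self-signed apiserver certificate (hence skip-verify) and the roughly 2-3s back-off the log's timestamps suggest; an illustration, not minikube's implementation:

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{Transport: &http.Transport{
		// Assumption: the bootstrapping apiserver serves a self-signed cert.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		// 5s per attempt, matching the logged Client.Timeout behavior.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			cancel()
			return err
		}
		resp, err := client.Do(req)
		cancel()
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				return nil // healthz reported healthy
			}
		}
		time.Sleep(2500 * time.Millisecond) // back off before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}
```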
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-803000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-01 05:11:51.253647 -0700 PDT m=+1331.794007168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-803000 -n running-upgrade-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-803000 -n running-upgrade-803000: exit status 2 (15.714282417s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-803000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-972000          | force-systemd-flag-972000 | jenkins | v1.33.1 | 01 Jul 24 05:01 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-076000              | force-systemd-env-076000  | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-076000           | force-systemd-env-076000  | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT | 01 Jul 24 05:02 PDT |
	| start   | -p docker-flags-122000                | docker-flags-122000       | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-972000             | force-systemd-flag-972000 | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-972000          | force-systemd-flag-972000 | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT | 01 Jul 24 05:02 PDT |
	| start   | -p cert-expiration-556000             | cert-expiration-556000    | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-122000 ssh               | docker-flags-122000       | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-122000 ssh               | docker-flags-122000       | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-122000                | docker-flags-122000       | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT | 01 Jul 24 05:02 PDT |
	| start   | -p cert-options-638000                | cert-options-638000       | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-638000 ssh               | cert-options-638000       | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-638000 -- sudo        | cert-options-638000       | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-638000                | cert-options-638000       | jenkins | v1.33.1 | 01 Jul 24 05:02 PDT | 01 Jul 24 05:02 PDT |
	| start   | -p running-upgrade-803000             | minikube                  | jenkins | v1.26.0 | 01 Jul 24 05:02 PDT | 01 Jul 24 05:03 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-803000             | running-upgrade-803000    | jenkins | v1.33.1 | 01 Jul 24 05:03 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-556000             | cert-expiration-556000    | jenkins | v1.33.1 | 01 Jul 24 05:05 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-556000             | cert-expiration-556000    | jenkins | v1.33.1 | 01 Jul 24 05:05 PDT | 01 Jul 24 05:05 PDT |
	| start   | -p kubernetes-upgrade-161000          | kubernetes-upgrade-161000 | jenkins | v1.33.1 | 01 Jul 24 05:05 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-161000          | kubernetes-upgrade-161000 | jenkins | v1.33.1 | 01 Jul 24 05:05 PDT | 01 Jul 24 05:05 PDT |
	| start   | -p kubernetes-upgrade-161000          | kubernetes-upgrade-161000 | jenkins | v1.33.1 | 01 Jul 24 05:05 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-161000          | kubernetes-upgrade-161000 | jenkins | v1.33.1 | 01 Jul 24 05:05 PDT | 01 Jul 24 05:05 PDT |
	| start   | -p stopped-upgrade-841000             | minikube                  | jenkins | v1.26.0 | 01 Jul 24 05:05 PDT | 01 Jul 24 05:06 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-841000 stop           | minikube                  | jenkins | v1.26.0 | 01 Jul 24 05:06 PDT | 01 Jul 24 05:06 PDT |
	| start   | -p stopped-upgrade-841000             | stopped-upgrade-841000    | jenkins | v1.33.1 | 01 Jul 24 05:06 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 05:06:37
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 05:06:37.128534   11947 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:06:37.128714   11947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:06:37.128718   11947 out.go:304] Setting ErrFile to fd 2...
	I0701 05:06:37.128721   11947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:06:37.128870   11947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:06:37.130072   11947 out.go:298] Setting JSON to false
	I0701 05:06:37.149587   11947 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7566,"bootTime":1719828031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:06:37.149665   11947 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:06:37.154020   11947 out.go:177] * [stopped-upgrade-841000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:06:37.160137   11947 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:06:37.160192   11947 notify.go:220] Checking for updates...
	I0701 05:06:37.167054   11947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:06:37.170034   11947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:06:37.173121   11947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:06:37.176023   11947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:06:37.179084   11947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:06:37.182363   11947 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:06:37.185999   11947 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0701 05:06:37.189080   11947 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:06:37.193032   11947 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:06:37.200069   11947 start.go:297] selected driver: qemu2
	I0701 05:06:37.200078   11947 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0701 05:06:37.200143   11947 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:06:37.202566   11947 cni.go:84] Creating CNI manager for ""
	I0701 05:06:37.202586   11947 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:06:37.202621   11947 start.go:340] cluster config:
	{Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0701 05:06:37.202679   11947 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:06:37.210029   11947 out.go:177] * Starting "stopped-upgrade-841000" primary control-plane node in "stopped-upgrade-841000" cluster
	I0701 05:06:37.214045   11947 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0701 05:06:37.214065   11947 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0701 05:06:37.214076   11947 cache.go:56] Caching tarball of preloaded images
	I0701 05:06:37.214157   11947 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:06:37.214163   11947 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0701 05:06:37.214228   11947 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/config.json ...
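
The profile save above serializes the whole cluster config to config.json under the profile directory. A minimal sketch of that persistence step, using a hypothetical trimmed-down ClusterConfig struct; the real minikube type carries every field visible in the dump above:

    package main

    import (
        "encoding/json"
        "os"
        "path/filepath"
    )

    // ClusterConfig is a deliberately tiny stand-in for minikube's real
    // config type; only a handful of the fields from the dump above.
    type ClusterConfig struct {
        Name              string
        Driver            string
        Memory            int
        CPUs              int
        KubernetesVersion string
    }

    // saveProfile writes the config as indented JSON to
    // <base>/profiles/<name>/config.json, creating directories as needed.
    func saveProfile(base string, cc ClusterConfig) error {
        dir := filepath.Join(base, "profiles", cc.Name)
        if err := os.MkdirAll(dir, 0o755); err != nil {
            return err
        }
        data, err := json.MarshalIndent(cc, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
    }

    func main() {
        _ = saveProfile(os.ExpandEnv("$HOME/.minikube"), ClusterConfig{
            Name: "stopped-upgrade-841000", Driver: "qemu2",
            Memory: 2200, CPUs: 2, KubernetesVersion: "v1.24.1",
        })
    }
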
	I0701 05:06:37.214696   11947 start.go:360] acquireMachinesLock for stopped-upgrade-841000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:06:37.214737   11947 start.go:364] duration metric: took 34.125µs to acquireMachinesLock for "stopped-upgrade-841000"
	I0701 05:06:37.214748   11947 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:06:37.214753   11947 fix.go:54] fixHost starting: 
	I0701 05:06:37.214876   11947 fix.go:112] recreateIfNeeded on stopped-upgrade-841000: state=Stopped err=<nil>
	W0701 05:06:37.214885   11947 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:06:37.222112   11947 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-841000" ...
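
The fixHost/recreateIfNeeded lines above branch on the machine's last known state: an existing but Stopped VM is restarted rather than recreated. A rough sketch of that decision, with a hypothetical State type standing in for the libmachine state enum:

    package main

    import (
        "errors"
        "fmt"
    )

    type State int

    const (
        Running State = iota
        Stopped
        Missing
    )

    // recreateIfNeeded mirrors the branch logged above: a stopped machine
    // is restarted; only a missing one is created from scratch.
    func recreateIfNeeded(st State, restart, create func() error) error {
        switch st {
        case Running:
            return nil // nothing to do
        case Stopped:
            fmt.Println("unexpected machine state, will restart")
            return restart()
        case Missing:
            return create()
        }
        return errors.New("unknown machine state")
    }

    func main() {
        _ = recreateIfNeeded(Stopped,
            func() error { fmt.Println("Restarting existing qemu2 VM ..."); return nil },
            func() error { return nil })
    }
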
	I0701 05:06:34.731559   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:37.226089   11947 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52333-:22,hostfwd=tcp::52334-:2376,hostname=stopped-upgrade-841000 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/disk.qcow2
	I0701 05:06:37.273786   11947 main.go:141] libmachine: STDOUT: 
	I0701 05:06:37.273811   11947 main.go:141] libmachine: STDERR: 
	I0701 05:06:37.273816   11947 main.go:141] libmachine: Waiting for VM to start (ssh -p 52333 docker@127.0.0.1)...
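
The qemu2 driver launches the VM by shelling out to qemu-system-aarch64 with the argument list logged above. A minimal sketch of assembling and daemonizing such an invocation with os/exec; the paths and forwarded ports are the ones from this run and would normally come from the machine config, and the EDK2 firmware pflash drive is omitted for brevity:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // startVM builds a qemu argv roughly like the one in the log: hvf
    // acceleration, user-mode NIC with SSH/Docker port forwards, and
    // -daemonize so qemu backgrounds itself once the VM is up.
    func startVM(machineDir string, sshPort, dockerPort int) error {
        args := []string{
            "-M", "virt,highmem=off",
            "-cpu", "host",
            "-display", "none",
            "-accel", "hvf",
            "-m", "2200", "-smp", "2", "-boot", "d",
            "-cdrom", machineDir + "/boot2docker.iso",
            "-qmp", "unix:" + machineDir + "/monitor,server,nowait",
            "-pidfile", machineDir + "/qemu.pid",
            "-nic", fmt.Sprintf("user,model=virtio,hostfwd=tcp::%d-:22,hostfwd=tcp::%d-:2376", sshPort, dockerPort),
            "-daemonize",
            machineDir + "/disk.qcow2",
        }
        out, err := exec.Command("qemu-system-aarch64", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("qemu failed: %v: %s", err, out) // qemu reports errors on stderr
        }
        return nil
    }

    func main() {
        _ = startVM("/tmp/machines/demo", 52333, 52334)
    }
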
	I0701 05:06:39.734152   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:06:39.734380   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:39.751305   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:39.751392   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:39.764682   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:39.764759   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:39.775964   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:39.776037   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:39.786669   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:39.786740   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:39.796751   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:39.796813   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:39.807457   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:39.807528   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:39.817850   11792 logs.go:276] 0 containers: []
	W0701 05:06:39.817860   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:39.817916   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:39.831244   11792 logs.go:276] 0 containers: []
	W0701 05:06:39.831254   11792 logs.go:278] No container was found matching "storage-provisioner"
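
Before gathering logs, the runner resolves each control-plane component to its container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, tolerating components with no containers at all (kindnet and storage-provisioner above). A sketch of that lookup against a local docker CLI, assuming docker is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or exited)
    // whose name matches k8s_<component>, as docker prints them.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line; empty output -> nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "lookup failed:", err)
                continue
            }
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }
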
	I0701 05:06:39.831262   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:39.831267   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:39.835753   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:39.835762   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:06:39.853166   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:39.853181   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:39.877353   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:39.877363   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:39.914928   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:39.914939   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:39.926179   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:39.926189   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:39.938304   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:39.938314   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:39.956726   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:39.956736   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:39.972277   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:39.972287   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:39.989788   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:39.989802   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:40.001308   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:40.001321   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:40.012256   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:40.012268   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:40.047309   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:40.047322   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:40.061551   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:40.061560   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:40.072820   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:40.072834   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:42.598594   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:47.600877   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
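
Each healthz probe above uses a short per-request deadline against the apiserver's self-signed endpoint; when the deadline passes, the apiserver is treated as stopped and the log-gathering round runs instead. A sketch of such a probe, assuming it is acceptable to skip TLS verification as this internal check does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz returns nil only if GET /healthz answers 200 within the
    // timeout; a slow or dead apiserver surfaces as an error, like the
    // "context deadline exceeded" lines above.
    func checkHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: timeout,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // internal probe only
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
            fmt.Println("stopped:", err)
        }
    }
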
	I0701 05:06:47.600986   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:47.613755   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:47.613833   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:47.626180   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:47.626252   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:47.638038   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:47.638102   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:47.654641   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:47.654722   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:47.666739   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:47.666814   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:47.679036   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:47.679103   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:47.694811   11792 logs.go:276] 0 containers: []
	W0701 05:06:47.694826   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:47.694889   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:47.706383   11792 logs.go:276] 0 containers: []
	W0701 05:06:47.706394   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:06:47.706403   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:47.706410   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:06:47.726764   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:47.726783   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:47.740424   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:47.740437   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:47.765135   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:47.765153   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:47.780264   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:47.780281   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:47.798880   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:47.798897   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:47.824496   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:47.824513   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:47.841430   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:47.841447   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:47.856043   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:47.856058   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:47.900561   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:47.900573   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:47.905260   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:47.905267   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:47.920603   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:47.920616   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:47.932775   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:47.932785   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:47.947824   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:47.947839   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:47.960208   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:47.960218   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:50.501883   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:56.396607   11947 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/config.json ...
	I0701 05:06:56.397106   11947 machine.go:94] provisionDockerMachine start ...
	I0701 05:06:56.397200   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.397476   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.397486   11947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 05:06:56.462254   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 05:06:56.462287   11947 buildroot.go:166] provisioning hostname "stopped-upgrade-841000"
	I0701 05:06:56.462366   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.462554   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.462561   11947 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-841000 && echo "stopped-upgrade-841000" | sudo tee /etc/hostname
	I0701 05:06:56.525951   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-841000
	
	I0701 05:06:56.526001   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.526140   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.526150   11947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-841000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-841000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-841000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 05:06:56.583129   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 05:06:56.583143   11947 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19166-9507/.minikube CaCertPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19166-9507/.minikube}
	I0701 05:06:56.583151   11947 buildroot.go:174] setting up certificates
	I0701 05:06:56.583159   11947 provision.go:84] configureAuth start
	I0701 05:06:56.583165   11947 provision.go:143] copyHostCerts
	I0701 05:06:56.583255   11947 exec_runner.go:144] found /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.pem, removing ...
	I0701 05:06:56.583262   11947 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.pem
	I0701 05:06:56.583365   11947 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.pem (1082 bytes)
	I0701 05:06:56.583560   11947 exec_runner.go:144] found /Users/jenkins/minikube-integration/19166-9507/.minikube/cert.pem, removing ...
	I0701 05:06:56.583564   11947 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19166-9507/.minikube/cert.pem
	I0701 05:06:56.583618   11947 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19166-9507/.minikube/cert.pem (1123 bytes)
	I0701 05:06:56.583733   11947 exec_runner.go:144] found /Users/jenkins/minikube-integration/19166-9507/.minikube/key.pem, removing ...
	I0701 05:06:56.583737   11947 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19166-9507/.minikube/key.pem
	I0701 05:06:56.583790   11947 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19166-9507/.minikube/key.pem (1679 bytes)
	I0701 05:06:56.583878   11947 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-841000 san=[127.0.0.1 localhost minikube stopped-upgrade-841000]
	I0701 05:06:56.701912   11947 provision.go:177] copyRemoteCerts
	I0701 05:06:56.701955   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 05:06:56.701964   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0701 05:06:56.730000   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 05:06:56.736849   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0701 05:06:56.746011   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0701 05:06:56.752797   11947 provision.go:87] duration metric: took 169.627458ms to configureAuth
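
configureAuth regenerates the Docker TLS server certificate so its SANs cover every name the daemon will be reached by (127.0.0.1, localhost, minikube, and the machine name, per the san=[...] line above). A compact sketch of signing such a cert with a pre-existing CA via crypto/x509; PEM encoding is omitted and the helper names are illustrative, not minikube's:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert signs a server certificate with the given CA, sorting
    // each SAN into the DNSNames or IPAddresses extension as appropriate.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        // Throwaway self-signed CA so the example runs standalone; the real
        // flow loads ca.pem/ca-key.pem from the certs directory instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(24 * 365 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)
        _, _, err := newServerCert(ca, caKey, "jenkins.stopped-upgrade-841000",
            []string{"127.0.0.1", "localhost", "minikube", "stopped-upgrade-841000"})
        fmt.Println("server cert signed:", err == nil)
    }
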
	I0701 05:06:56.752805   11947 buildroot.go:189] setting minikube options for container-runtime
	I0701 05:06:56.752935   11947 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:06:56.752966   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.753068   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.753072   11947 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 05:06:56.804644   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 05:06:56.804656   11947 buildroot.go:70] root file system type: tmpfs
	I0701 05:06:56.804706   11947 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 05:06:56.804761   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.804865   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.804897   11947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 05:06:56.860187   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 05:06:56.860246   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.860352   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.860360   11947 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
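
The one-liner above is the driver's install-only-if-changed idiom: the fresh unit is written to docker.service.new, diffed against the live unit, and only on a difference moved into place and followed by daemon-reload/enable/restart (here the diff fails because no unit exists yet, so the file is installed and docker enabled, as the next SSH output shows). The same idiom sketched in Go, assuming root on a systemd host:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged replaces dst with content and restarts the unit only
    // when the on-disk file differs or is missing, mirroring the
    // diff-then-mv shell pipeline in the log.
    func installIfChanged(dst string, content []byte, unit string) error {
        old, err := os.ReadFile(dst)
        if err == nil && bytes.Equal(old, content) {
            return nil // already up to date; skip the restart entirely
        }
        if err := os.WriteFile(dst, content, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"-f", "enable", unit},
            {"-f", "restart", unit},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        _ = installIfChanged("/lib/systemd/system/docker.service",
            []byte("[Unit]\nDescription=Docker Application Container Engine\n"),
            "docker")
    }
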
	I0701 05:06:55.504571   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:06:55.505029   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:06:55.543385   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:06:55.543526   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:06:55.567391   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:06:55.567512   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:06:55.581693   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:06:55.581770   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:06:55.593710   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:06:55.593782   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:06:55.604889   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:06:55.604962   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:06:55.615217   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:06:55.615285   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:06:55.625603   11792 logs.go:276] 0 containers: []
	W0701 05:06:55.625612   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:06:55.625665   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:06:55.635946   11792 logs.go:276] 0 containers: []
	W0701 05:06:55.635959   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:06:55.635966   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:06:55.635970   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:06:55.659702   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:06:55.659713   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:06:55.676946   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:06:55.676960   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:06:55.689045   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:06:55.689058   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:06:55.727282   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:06:55.727290   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:06:55.731881   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:06:55.731887   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:06:55.746107   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:06:55.746120   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:06:55.757516   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:06:55.757527   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:06:55.774598   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:06:55.774608   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:06:55.786341   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:06:55.786356   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:06:55.824926   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:06:55.824936   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:06:55.838535   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:06:55.838546   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:06:55.854331   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:06:55.854341   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:06:55.868132   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:06:55.868144   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:06:55.879603   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:06:55.879613   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:06:58.404546   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:06:57.231922   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 05:06:57.231936   11947 machine.go:97] duration metric: took 834.824292ms to provisionDockerMachine
	I0701 05:06:57.231942   11947 start.go:293] postStartSetup for "stopped-upgrade-841000" (driver="qemu2")
	I0701 05:06:57.231949   11947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 05:06:57.232017   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 05:06:57.232026   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0701 05:06:57.261286   11947 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 05:06:57.262585   11947 info.go:137] Remote host: Buildroot 2021.02.12
	I0701 05:06:57.262593   11947 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19166-9507/.minikube/addons for local assets ...
	I0701 05:06:57.262672   11947 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19166-9507/.minikube/files for local assets ...
	I0701 05:06:57.262790   11947 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem -> 100032.pem in /etc/ssl/certs
	I0701 05:06:57.262927   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 05:06:57.265956   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem --> /etc/ssl/certs/100032.pem (1708 bytes)
	I0701 05:06:57.272947   11947 start.go:296] duration metric: took 40.999208ms for postStartSetup
	I0701 05:06:57.272960   11947 fix.go:56] duration metric: took 20.058293875s for fixHost
	I0701 05:06:57.273003   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:57.273125   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:57.273130   11947 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 05:06:57.324390   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719835617.217066337
	
	I0701 05:06:57.324399   11947 fix.go:216] guest clock: 1719835617.217066337
	I0701 05:06:57.324402   11947 fix.go:229] Guest: 2024-07-01 05:06:57.217066337 -0700 PDT Remote: 2024-07-01 05:06:57.272962 -0700 PDT m=+20.178339418 (delta=-55.895663ms)
	I0701 05:06:57.324413   11947 fix.go:200] guest clock delta is within tolerance: -55.895663ms
	I0701 05:06:57.324416   11947 start.go:83] releasing machines lock for "stopped-upgrade-841000", held for 20.109759042s
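
The `date +%s.%N` command (its format string mangled above into %!s(MISSING).%!N(MISSING) by the logger's own printf handling) reports the guest clock as seconds.nanoseconds; the host compares it to its own time and only adjusts the guest when the delta leaves a tolerance window. A sketch of that comparison using the exact timestamps from this run; the one-second tolerance is an assumption, the real threshold lives in fix.go:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output such as
    // "1719835617.217066337" into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
        sec, err := strconv.ParseInt(secStr, 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if nsecStr != "" {
            if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1719835617.217066337")
        // The host-side "Remote:" timestamp from the log above.
        remote := time.Date(2024, time.July, 1, 5, 6, 57, 272962000, time.FixedZone("PDT", -7*3600))
        delta := guest.Sub(remote)    // negative means the guest is behind the host
        const tolerance = time.Second // assumed threshold for illustration
        if delta > -tolerance && delta < tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // -55.895663ms
        } else {
            fmt.Printf("adjusting guest clock, delta %v\n", delta)
        }
    }
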
	I0701 05:06:57.324485   11947 ssh_runner.go:195] Run: cat /version.json
	I0701 05:06:57.324499   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0701 05:06:57.324485   11947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 05:06:57.324585   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	W0701 05:06:57.325129   11947 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:52459->127.0.0.1:52333: read: connection reset by peer
	I0701 05:06:57.325149   11947 retry.go:31] will retry after 274.068046ms: ssh: handshake failed: read tcp 127.0.0.1:52459->127.0.0.1:52333: read: connection reset by peer
	W0701 05:06:57.352775   11947 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0701 05:06:57.352820   11947 ssh_runner.go:195] Run: systemctl --version
	I0701 05:06:57.354391   11947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 05:06:57.355961   11947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 05:06:57.355985   11947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0701 05:06:57.358983   11947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0701 05:06:57.363518   11947 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 05:06:57.363532   11947 start.go:494] detecting cgroup driver to use...
	I0701 05:06:57.363611   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 05:06:57.370452   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0701 05:06:57.373775   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 05:06:57.376578   11947 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 05:06:57.376602   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 05:06:57.379550   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 05:06:57.382971   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 05:06:57.386503   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 05:06:57.389772   11947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 05:06:57.392488   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 05:06:57.395458   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 05:06:57.398870   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 05:06:57.402261   11947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 05:06:57.404817   11947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 05:06:57.407571   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:06:57.489974   11947 ssh_runner.go:195] Run: sudo systemctl restart containerd
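
The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox image, force SystemdCgroup = false (the cgroupfs driver), migrate io.containerd.runtime.v1.linux and runc.v1 names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same kind of indentation-preserving, line-oriented rewrite in Go, shown for the SystemdCgroup toggle only:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs flips every `SystemdCgroup = ...` assignment to false,
    // keeping the original indentation just like the `sed -r
    // 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'` above.
    func setCgroupfs(toml []byte) []byte {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAll(toml, []byte("${1}SystemdCgroup = false"))
    }

    func main() {
        in := []byte("  [plugins.cri.containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
        fmt.Print(string(setCgroupfs(in)))
    }
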
	I0701 05:06:57.495643   11947 start.go:494] detecting cgroup driver to use...
	I0701 05:06:57.495698   11947 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 05:06:57.502003   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 05:06:57.510567   11947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 05:06:57.516460   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 05:06:57.520807   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 05:06:57.525116   11947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 05:06:57.584571   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 05:06:57.589738   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 05:06:57.595080   11947 ssh_runner.go:195] Run: which cri-dockerd
	I0701 05:06:57.596407   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 05:06:57.599144   11947 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 05:06:57.604351   11947 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 05:06:57.677606   11947 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 05:06:57.748345   11947 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 05:06:57.748399   11947 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 05:06:57.754673   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:06:57.823854   11947 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 05:06:58.985222   11947 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161355584s)
	I0701 05:06:58.985284   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 05:06:58.991654   11947 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 05:06:58.997945   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 05:06:59.002498   11947 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 05:06:59.085024   11947 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 05:06:59.165448   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:06:59.243195   11947 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 05:06:59.249662   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 05:06:59.254257   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:06:59.317881   11947 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 05:06:59.358091   11947 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 05:06:59.358164   11947 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 05:06:59.360281   11947 start.go:562] Will wait 60s for crictl version
	I0701 05:06:59.360315   11947 ssh_runner.go:195] Run: which crictl
	I0701 05:06:59.361559   11947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 05:06:59.376843   11947 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
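
`sudo /usr/bin/crictl version` prints the key/value lines above (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion), which tell start.go it is talking to Docker 20.10.16 over CRI 1.41.0. A sketch of parsing that output, fed the exact text from the log:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseCrictlVersion splits "Key:  value" lines into a map, skipping
    // anything that does not look like a field.
    func parseCrictlVersion(out string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            k, v, ok := strings.Cut(sc.Text(), ":")
            if !ok {
                continue
            }
            fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
        }
        return fields
    }

    func main() {
        out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  20.10.16\nRuntimeApiVersion:  1.41.0\n"
        f := parseCrictlVersion(out)
        fmt.Printf("runtime %s %s (CRI %s)\n", f["RuntimeName"], f["RuntimeVersion"], f["RuntimeApiVersion"])
    }
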
	I0701 05:06:59.376911   11947 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 05:06:59.393550   11947 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 05:06:59.412401   11947 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0701 05:06:59.412464   11947 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0701 05:06:59.413748   11947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 05:06:59.417203   11947 kubeadm.go:877] updating cluster {Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0701 05:06:59.417246   11947 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0701 05:06:59.417284   11947 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 05:06:59.427632   11947 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0701 05:06:59.427640   11947 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0701 05:06:59.427686   11947 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 05:06:59.431029   11947 ssh_runner.go:195] Run: which lz4
	I0701 05:06:59.432219   11947 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0701 05:06:59.433425   11947 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0701 05:06:59.433435   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0701 05:07:00.374915   11947 docker.go:649] duration metric: took 942.731292ms to copy over tarball
	I0701 05:07:00.374973   11947 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0701 05:07:01.558430   11947 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1834495s)
	I0701 05:07:01.558455   11947 ssh_runner.go:146] rm: /preloaded.tar.lz4
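
The preload path above is: stat the tarball on the guest, scp it over when missing (~360 MB here), untar it with lz4 into /var preserving security xattrs, then delete it. A sketch of the guest-side steps, assuming a local shell rather than the ssh_runner and that tar and lz4 are installed:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks a docker-overlay2 preload tarball into /var,
    // keeping security xattrs (needed for file capabilities), then removes
    // the tarball, mirroring the tar invocation in the log.
    func extractPreload(tarball string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload missing, copy it over first: %w", err)
        }
        cmd := exec.Command("tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("tar: %v: %s", err, out)
        }
        return os.Remove(tarball)
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            fmt.Println(err)
        }
    }
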
	I0701 05:07:01.574562   11947 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 05:07:01.577619   11947 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0701 05:07:01.582495   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:07:01.662621   11947 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 05:07:03.404969   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:03.405034   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:07:03.417249   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:07:03.417337   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:07:03.428846   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:07:03.428897   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:07:03.449470   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:07:03.449532   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:07:03.466435   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:07:03.466495   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:07:03.483916   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:07:03.483979   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:07:03.495355   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:07:03.495422   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:07:03.507055   11792 logs.go:276] 0 containers: []
	W0701 05:07:03.507068   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:07:03.507100   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:07:03.518693   11792 logs.go:276] 0 containers: []
	W0701 05:07:03.518706   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:07:03.518714   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:07:03.518719   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:07:03.533553   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:07:03.533564   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:07:03.546188   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:07:03.546202   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:07:03.278701   11947 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.616067458s)
	I0701 05:07:03.278789   11947 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 05:07:03.297382   11947 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0701 05:07:03.297393   11947 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0701 05:07:03.297398   11947 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0701 05:07:03.303373   11947 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:07:03.305287   11947 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:03.307277   11947 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:07:03.307341   11947 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0701 05:07:03.308898   11947 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:07:03.308970   11947 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:03.310230   11947 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0701 05:07:03.310388   11947 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:07:03.311852   11947 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:07:03.311865   11947 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:07:03.313241   11947 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:07:03.313327   11947 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:07:03.314273   11947 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:07:03.314295   11947 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:07:03.315169   11947 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:07:03.316058   11947 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:07:03.695491   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0701 05:07:03.695959   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:07:03.704586   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0701 05:07:03.707982   11947 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0701 05:07:03.708006   11947 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0701 05:07:03.708056   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0701 05:07:03.712800   11947 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0701 05:07:03.712822   11947 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:07:03.712871   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:07:03.723160   11947 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0701 05:07:03.723197   11947 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:07:03.723296   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0701 05:07:03.734786   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0701 05:07:03.735057   11947 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0701 05:07:03.735959   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0701 05:07:03.744293   11947 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0701 05:07:03.744321   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0701 05:07:03.744493   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0701 05:07:03.744596   11947 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0701 05:07:03.747638   11947 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0701 05:07:03.747661   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0701 05:07:03.762441   11947 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0701 05:07:03.762466   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0701 05:07:03.768132   11947 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0701 05:07:03.768260   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:07:03.780065   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:07:03.781983   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:07:03.803667   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:07:03.858138   11947 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0701 05:07:03.858166   11947 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0701 05:07:03.858188   11947 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:07:03.858248   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:07:03.858249   11947 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0701 05:07:03.858296   11947 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:07:03.858325   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:07:03.862664   11947 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0701 05:07:03.862687   11947 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:07:03.862667   11947 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0701 05:07:03.862708   11947 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:07:03.862750   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:07:03.862750   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:07:03.889840   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0701 05:07:03.899867   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0701 05:07:03.899994   11947 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0701 05:07:03.906579   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0701 05:07:03.906590   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0701 05:07:03.917072   11947 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0701 05:07:03.917111   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0701 05:07:03.935893   11947 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0701 05:07:03.936016   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:03.979053   11947 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0701 05:07:03.979076   11947 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:03.979128   11947 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:04.008109   11947 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0701 05:07:04.008127   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0701 05:07:04.027094   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0701 05:07:04.027221   11947 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0701 05:07:04.106231   11947 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0701 05:07:04.106239   11947 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0701 05:07:04.106266   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0701 05:07:04.114255   11947 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0701 05:07:04.114269   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0701 05:07:04.275123   11947 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0701 05:07:04.275158   11947 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0701 05:07:04.275209   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0701 05:07:04.516973   11947 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0701 05:07:04.517013   11947 cache_images.go:92] duration metric: took 1.219612333s to LoadCachedImages
	W0701 05:07:04.517054   11947 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
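
The sequence above is the cached-image restore path: each image tarball is checked on the guest with stat, copied over when absent, then piped into docker load; the closing warning shows the step failing as a whole because one cached tarball (kube-proxy_v1.24.1) was never downloaded to the host cache. A minimal Go sketch of the same check-transfer-load pattern (the run helper is hypothetical and executes locally, standing in for minikube's ssh runner):

// Illustrative sketch, not minikube's actual code: check whether a cached
// image tarball already exists on the guest, copy it over if not, then
// pipe it into `docker load`.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and folds its output into any error.
// Hypothetical helper standing in for ssh_runner.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

func loadCachedImage(cachePath, guestPath string) error {
	// Existence check: `stat -c "%s %y" <path>` exits non-zero if missing.
	if err := run("stat", "-c", "%s %y", guestPath); err != nil {
		// Not present: transfer it (scp in the real flow; cp stands in here).
		if err := run("cp", cachePath, guestPath); err != nil {
			return err
		}
	}
	// Load the tarball into the container runtime.
	return run("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", guestPath))
}

func main() {
	if err := loadCachedImage(
		"/tmp/cache/pause_3.7", // cached tarball (example path)
		"/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println(err)
	}
}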
	I0701 05:07:04.517061   11947 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0701 05:07:04.517116   11947 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-841000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 05:07:04.517176   11947 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 05:07:04.530828   11947 cni.go:84] Creating CNI manager for ""
	I0701 05:07:04.530841   11947 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:07:04.530846   11947 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 05:07:04.530854   11947 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-841000 NodeName:stopped-upgrade-841000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 05:07:04.530923   11947 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-841000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 05:07:04.530977   11947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0701 05:07:04.534578   11947 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 05:07:04.534618   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 05:07:04.537342   11947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0701 05:07:04.542053   11947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 05:07:04.547027   11947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
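
The kubelet drop-in, kubelet.service, and kubeadm.yaml.new written above are rendered from the kubeadm options logged at kubeadm.go:181. A minimal sketch, assuming a simplified options struct (field names here are illustrative, not minikube's actual types), of rendering such a config with text/template:

// Sketch of rendering a kubeadm InitConfiguration fragment from an
// options struct, in the spirit of the config block above.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "10.0.2.15",
		APIServerPort:    8443,
		CRISocket:        "/var/run/cri-dockerd.sock",
		NodeName:         "stopped-upgrade-841000",
	})
}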
	I0701 05:07:04.552583   11947 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0701 05:07:04.553704   11947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
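
The /etc/hosts command above is an idempotent update: strip any existing control-plane.minikube.internal entry, then append the current mapping. The same rewrite expressed in Go (illustrative only; the real flow runs the shell one-liner over ssh, and writing /etc/hosts requires root):

// Sketch of the idempotent /etc/hosts update above: drop any existing
// control-plane.minikube.internal line, then append the current mapping.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "10.0.2.15\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}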
	I0701 05:07:04.557215   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:07:04.634852   11947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 05:07:04.641045   11947 certs.go:68] Setting up /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000 for IP: 10.0.2.15
	I0701 05:07:04.641057   11947 certs.go:194] generating shared ca certs ...
	I0701 05:07:04.641066   11947 certs.go:226] acquiring lock for ca certs: {Name:mkd4046b456c87b80b2e6f34890c01f767ca15e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:04.641241   11947 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.key
	I0701 05:07:04.641292   11947 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/proxy-client-ca.key
	I0701 05:07:04.641299   11947 certs.go:256] generating profile certs ...
	I0701 05:07:04.641382   11947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.key
	I0701 05:07:04.641400   11947 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301
	I0701 05:07:04.641423   11947 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0701 05:07:04.765449   11947 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 ...
	I0701 05:07:04.765464   11947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301: {Name:mkd89e4947fa3c5d3ba4b598d83619c33a5b2c2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:04.769882   11947 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301 ...
	I0701 05:07:04.769891   11947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301: {Name:mk2fda541721dec72ff3d6d7d66d18f65003a0f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:04.770028   11947 certs.go:381] copying /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt
	I0701 05:07:04.770174   11947 certs.go:385] copying /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key
	I0701 05:07:04.770335   11947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/proxy-client.key
	I0701 05:07:04.770467   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/10003.pem (1338 bytes)
	W0701 05:07:04.770496   11947 certs.go:480] ignoring /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/10003_empty.pem, impossibly tiny 0 bytes
	I0701 05:07:04.770501   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca-key.pem (1679 bytes)
	I0701 05:07:04.770528   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem (1082 bytes)
	I0701 05:07:04.770550   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem (1123 bytes)
	I0701 05:07:04.770576   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/key.pem (1679 bytes)
	I0701 05:07:04.770624   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem (1708 bytes)
	I0701 05:07:04.770962   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 05:07:04.778649   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 05:07:04.786271   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 05:07:04.793263   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0701 05:07:04.799909   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0701 05:07:04.806777   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0701 05:07:04.813419   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 05:07:04.820363   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 05:07:04.827257   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem --> /usr/share/ca-certificates/100032.pem (1708 bytes)
	I0701 05:07:04.834397   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 05:07:04.840880   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/10003.pem --> /usr/share/ca-certificates/10003.pem (1338 bytes)
	I0701 05:07:04.847614   11947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 05:07:04.852938   11947 ssh_runner.go:195] Run: openssl version
	I0701 05:07:04.854761   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100032.pem && ln -fs /usr/share/ca-certificates/100032.pem /etc/ssl/certs/100032.pem"
	I0701 05:07:04.858176   11947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100032.pem
	I0701 05:07:04.859600   11947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 11:50 /usr/share/ca-certificates/100032.pem
	I0701 05:07:04.859619   11947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100032.pem
	I0701 05:07:04.861392   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100032.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 05:07:04.864155   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 05:07:04.867365   11947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 05:07:04.868881   11947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:03 /usr/share/ca-certificates/minikubeCA.pem
	I0701 05:07:04.868901   11947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 05:07:04.870625   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 05:07:04.873743   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10003.pem && ln -fs /usr/share/ca-certificates/10003.pem /etc/ssl/certs/10003.pem"
	I0701 05:07:04.876485   11947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10003.pem
	I0701 05:07:04.877856   11947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 11:50 /usr/share/ca-certificates/10003.pem
	I0701 05:07:04.877877   11947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10003.pem
	I0701 05:07:04.879648   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10003.pem /etc/ssl/certs/51391683.0"
	I0701 05:07:04.883023   11947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 05:07:04.884620   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 05:07:04.886768   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 05:07:04.888850   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 05:07:04.890924   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 05:07:04.892710   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 05:07:04.894415   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
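
Each openssl run above uses -checkend 86400, which fails when the certificate expires within the next 24 hours, so the tool knows which certs need regeneration before restarting the control plane. An equivalent check with Go's crypto/x509 (the path is illustrative):

// Sketch of the 24-hour expiry check that `openssl x509 -checkend 86400`
// performs above, done with crypto/x509 instead.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the cert's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}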
	I0701 05:07:04.896565   11947 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0701 05:07:04.896638   11947 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 05:07:04.906593   11947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 05:07:04.909570   11947 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 05:07:04.909576   11947 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 05:07:04.909579   11947 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 05:07:04.909602   11947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 05:07:04.912291   11947 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 05:07:04.912593   11947 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-841000" does not appear in /Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:07:04.912692   11947 kubeconfig.go:62] /Users/jenkins/minikube-integration/19166-9507/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-841000" cluster setting kubeconfig missing "stopped-upgrade-841000" context setting]
	I0701 05:07:04.912889   11947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/kubeconfig: {Name:mk4c90f67cab929310b408048010034fbad6f093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:04.913328   11947 kapi.go:59] client config for stopped-upgrade-841000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.key", CAFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105d4d090), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
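
The rest.Config dumped above authenticates with the profile's client certificate and the cluster CA. A sketch of constructing the same kind of client with k8s.io/client-go (assumes the client-go module is on the module path; paths shortened from the log):

// Sketch of building the client config above with client-go.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: ".minikube/profiles/stopped-upgrade-841000/client.crt",
			KeyFile:  ".minikube/profiles/stopped-upgrade-841000/client.key",
			CAFile:   ".minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	_ = clientset // ready for API calls, e.g. clientset.CoreV1().Pods("")
}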
	I0701 05:07:04.913676   11947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 05:07:04.916187   11947 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-841000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
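
Drift is detected by running sudo diff -u over the old and new kubeadm.yaml: diff exits 0 when the files match and 1 when they differ, and the unified diff above is its output (here the CRI socket gained the unix:// scheme and the cgroup driver changed to cgroupfs). A Go sketch of the same exit-code check (error handling simplified; a missing file would also be reported as drift):

// Sketch of the drift check above: `diff -u` exits 0 when files match,
// 1 when they differ, so a non-zero exit signals drift and the unified
// diff is on stdout.
package main

import (
	"fmt"
	"os/exec"
)

func configDrift(oldPath, newPath string) (string, bool) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
	if err != nil {
		// Exit status 1 means the files differ.
		return string(out), true
	}
	return "", false
}

func main() {
	if diff, drifted := configDrift(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new"); drifted {
		fmt.Println("will reconfigure cluster:\n" + diff)
	}
}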
	I0701 05:07:04.916193   11947 kubeadm.go:1154] stopping kube-system containers ...
	I0701 05:07:04.916229   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 05:07:04.926935   11947 docker.go:483] Stopping containers: [b42d072377e4 d5dd8fab773c 4290f4ea2713 6093aa79356b 4fa696cbe259 164948541ac9 6bb114ebadf6 61acb4180c04]
	I0701 05:07:04.927008   11947 ssh_runner.go:195] Run: docker stop b42d072377e4 d5dd8fab773c 4290f4ea2713 6093aa79356b 4fa696cbe259 164948541ac9 6bb114ebadf6 61acb4180c04
	I0701 05:07:04.937074   11947 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 05:07:04.942651   11947 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 05:07:04.945567   11947 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 05:07:04.945572   11947 kubeadm.go:156] found existing configuration files:
	
	I0701 05:07:04.945597   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf
	I0701 05:07:04.947887   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0701 05:07:04.947905   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0701 05:07:04.950983   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf
	I0701 05:07:04.954059   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0701 05:07:04.954078   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0701 05:07:04.956696   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf
	I0701 05:07:04.959243   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0701 05:07:04.959266   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 05:07:04.962532   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf
	I0701 05:07:04.965165   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0701 05:07:04.965190   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
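
The loop above sweeps the kubeconfigs under /etc/kubernetes: any file that does not reference the expected control-plane endpoint is removed so the kubeadm init phases that follow can regenerate it. The same sweep sketched in Go (endpoint and paths taken from the log; run as root in the real flow):

// Sketch of the stale-config sweep above: keep each kubeconfig only if
// it references the expected endpoint, otherwise remove it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:52368"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: drop it.
			fmt.Println("removing", f)
			os.Remove(f)
		}
	}
}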
	I0701 05:07:04.967590   11947 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 05:07:04.970701   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:07:04.994726   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:07:05.591930   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:07:05.720534   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:07:05.746465   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:07:05.776673   11947 api_server.go:52] waiting for apiserver process to appear ...
	I0701 05:07:05.776756   11947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:07:06.278810   11947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:07:06.778784   11947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:07:06.783095   11947 api_server.go:72] duration metric: took 1.006427708s to wait for apiserver process to appear ...
	I0701 05:07:06.783105   11947 api_server.go:88] waiting for apiserver healthz status ...
	I0701 05:07:06.783120   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
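
The healthz wait above polls https://10.0.2.15:8443/healthz with a short per-request timeout until the apiserver answers or an overall deadline passes; the repeated "context deadline exceeded" lines that follow are those per-request timeouts firing. A minimal Go sketch of that loop (the deadline and TLS handling are illustrative; minikube proper verifies against its own CA rather than skipping verification):

// Sketch of the healthz poll: GET /healthz with a per-request timeout,
// retrying until an overall deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, as in the log
		Transport: &http.Transport{
			// Self-signed apiserver cert; the real code trusts its own CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}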
	I0701 05:07:03.568191   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:07:03.568204   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:07:03.581822   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:07:03.581839   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:07:03.622178   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:07:03.622193   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:07:03.626953   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:07:03.626963   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:07:03.641090   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:07:03.641102   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:07:03.677197   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:07:03.677208   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:07:03.701282   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:07:03.701296   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:07:03.718220   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:07:03.718234   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:07:03.733844   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:07:03.733855   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:07:03.749311   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:07:03.749322   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:07:03.767225   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:07:03.767239   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:07:03.781759   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:07:03.781771   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:07:06.309646   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:11.785236   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:11.785264   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:11.311810   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:11.311968   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:07:11.322759   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:07:11.322825   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:07:11.332904   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:07:11.332978   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:07:11.347302   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:07:11.347364   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:07:11.358406   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:07:11.358481   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:07:11.368631   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:07:11.368697   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:07:11.378835   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:07:11.378901   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:07:11.388940   11792 logs.go:276] 0 containers: []
	W0701 05:07:11.388952   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:07:11.389011   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:07:11.399393   11792 logs.go:276] 0 containers: []
	W0701 05:07:11.399405   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:07:11.399412   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:07:11.399418   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:07:11.410573   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:07:11.410584   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:07:11.445859   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:07:11.445869   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:07:11.463815   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:07:11.463825   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:07:11.475990   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:07:11.476001   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:07:11.488338   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:07:11.488347   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:07:11.506317   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:07:11.506327   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:07:11.518201   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:07:11.518212   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:07:11.523169   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:07:11.523179   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:07:11.547299   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:07:11.547309   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:07:11.560764   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:07:11.560774   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:07:11.575114   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:07:11.575124   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:07:11.586655   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:07:11.586665   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:07:11.625373   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:07:11.625382   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:07:11.640366   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:07:11.640376   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:07:16.785545   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:16.785590   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:14.165592   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:21.786045   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:21.786094   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:19.167949   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:19.168169   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:07:19.189690   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:07:19.189795   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:07:19.204920   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:07:19.205002   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:07:19.218075   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:07:19.218178   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:07:19.229214   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:07:19.229281   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:07:19.241157   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:07:19.241228   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:07:19.251762   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:07:19.251821   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:07:19.262368   11792 logs.go:276] 0 containers: []
	W0701 05:07:19.262382   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:07:19.262449   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:07:19.275321   11792 logs.go:276] 0 containers: []
	W0701 05:07:19.275332   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:07:19.275341   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:07:19.275346   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:07:19.293356   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:07:19.293368   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:07:19.305311   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:07:19.305325   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:07:19.318884   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:07:19.318898   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:07:19.330243   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:07:19.330258   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:07:19.354839   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:07:19.354851   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:07:19.369055   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:07:19.369069   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:07:19.380925   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:07:19.380938   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:07:19.395696   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:07:19.395706   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:07:19.407147   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:07:19.407158   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:07:19.431231   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:07:19.431238   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:07:19.470236   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:07:19.470244   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:07:19.474766   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:07:19.474771   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:07:19.516014   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:07:19.516025   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:07:19.530676   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:07:19.530685   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:07:22.046257   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:26.786790   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:26.786856   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:27.046870   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:27.047118   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:07:27.075392   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:07:27.075513   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:07:27.093309   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:07:27.093389   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:07:27.106557   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:07:27.106630   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:07:27.119560   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:07:27.119635   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:07:27.130073   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:07:27.130145   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:07:27.141362   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:07:27.141434   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:07:27.151810   11792 logs.go:276] 0 containers: []
	W0701 05:07:27.151822   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:07:27.151878   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:07:27.162525   11792 logs.go:276] 0 containers: []
	W0701 05:07:27.162537   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:07:27.162546   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:07:27.162552   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:07:27.175790   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:07:27.175800   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:07:27.193368   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:07:27.193380   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:07:27.205457   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:07:27.205469   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:07:27.209702   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:07:27.209714   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:07:27.233870   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:07:27.233881   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:07:27.245112   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:07:27.245121   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:07:27.267998   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:07:27.268005   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:07:27.305554   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:07:27.305565   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:07:27.330666   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:07:27.330674   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:07:27.344878   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:07:27.344887   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:07:27.356640   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:07:27.356649   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:07:27.371145   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:07:27.371154   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:07:27.382705   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:07:27.382715   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:07:27.419756   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:07:27.419764   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:07:31.787888   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:31.787921   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:29.935869   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:36.788965   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:36.789025   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:34.938083   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:34.938245   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:07:34.950256   11792 logs.go:276] 2 containers: [404055752cb2 f3ec9d500953]
	I0701 05:07:34.950338   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:07:34.962481   11792 logs.go:276] 2 containers: [1cc68405ece0 9179c9dfd861]
	I0701 05:07:34.962548   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:07:34.973265   11792 logs.go:276] 1 containers: [70f71c17f4ab]
	I0701 05:07:34.973334   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:07:34.984333   11792 logs.go:276] 2 containers: [d5f87fc1e6cb f13cb6673393]
	I0701 05:07:34.984407   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:07:34.994549   11792 logs.go:276] 1 containers: [b82cfe0e02b4]
	I0701 05:07:34.994625   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:07:35.005276   11792 logs.go:276] 2 containers: [63ed7c2907af 59e69595559e]
	I0701 05:07:35.005348   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:07:35.019404   11792 logs.go:276] 0 containers: []
	W0701 05:07:35.019416   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:07:35.019477   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:07:35.030069   11792 logs.go:276] 0 containers: []
	W0701 05:07:35.030081   11792 logs.go:278] No container was found matching "storage-provisioner"
	I0701 05:07:35.030089   11792 logs.go:123] Gathering logs for kube-scheduler [d5f87fc1e6cb] ...
	I0701 05:07:35.030093   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f87fc1e6cb"
	I0701 05:07:35.041620   11792 logs.go:123] Gathering logs for kube-apiserver [f3ec9d500953] ...
	I0701 05:07:35.041630   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ec9d500953"
	I0701 05:07:35.065514   11792 logs.go:123] Gathering logs for kube-scheduler [f13cb6673393] ...
	I0701 05:07:35.065525   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f13cb6673393"
	I0701 05:07:35.081346   11792 logs.go:123] Gathering logs for kube-proxy [b82cfe0e02b4] ...
	I0701 05:07:35.081361   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b82cfe0e02b4"
	I0701 05:07:35.093196   11792 logs.go:123] Gathering logs for kube-controller-manager [63ed7c2907af] ...
	I0701 05:07:35.093206   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ed7c2907af"
	I0701 05:07:35.113936   11792 logs.go:123] Gathering logs for kube-controller-manager [59e69595559e] ...
	I0701 05:07:35.113947   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e69595559e"
	I0701 05:07:35.125523   11792 logs.go:123] Gathering logs for etcd [9179c9dfd861] ...
	I0701 05:07:35.125535   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9179c9dfd861"
	I0701 05:07:35.138900   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:07:35.138917   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:07:35.143414   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:07:35.143426   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:07:35.176652   11792 logs.go:123] Gathering logs for kube-apiserver [404055752cb2] ...
	I0701 05:07:35.176663   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404055752cb2"
	I0701 05:07:35.190725   11792 logs.go:123] Gathering logs for coredns [70f71c17f4ab] ...
	I0701 05:07:35.190736   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70f71c17f4ab"
	I0701 05:07:35.202096   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:07:35.202107   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:07:35.226669   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:07:35.226677   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:07:35.238274   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:07:35.238285   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:07:35.277685   11792 logs.go:123] Gathering logs for etcd [1cc68405ece0] ...
	I0701 05:07:35.277698   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cc68405ece0"
	I0701 05:07:37.793240   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:41.790578   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:41.790636   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:42.794908   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:42.795086   11792 kubeadm.go:591] duration metric: took 4m4.576529083s to restartPrimaryControlPlane
	W0701 05:07:42.795222   11792 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
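The alternating api_server.go:253/269 entries above and below are a bounded poll loop: each probe of /healthz carries a short per-request client timeout (the timestamps suggest roughly 5 s, matching the "Client.Timeout exceeded" errors), and probes repeat until an overall deadline runs out, here the 4m0s budget for restarting the primary control plane, after which minikube falls back to kubeadm reset. A minimal sketch of that pattern in Go; the helper name, timeouts, and TLS handling are assumptions for illustration, not minikube's actual api_server.go code:

// healthz_poll.go: a sketch of the poll loop suggested by the
// api_server.go:253/269 lines in this log. Names and durations are
// assumptions, not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz probes url until it answers "ok" or the overall budget
// passes. Each probe is capped by a per-request timeout, which is what
// produces the "Client.Timeout exceeded" failures seen above.
func waitForHealthz(url string, interval, perProbe, overall time.Duration) error {
	client := &http.Client{
		Timeout: perProbe,
		Transport: &http.Transport{
			// Skip verification only to keep the sketch self-contained;
			// a real client would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		} else {
			fmt.Printf("stopped: %s: %v\n", url, err)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz",
		time.Second, 5*time.Second, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}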
	I0701 05:07:42.795281   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0701 05:07:43.781266   11792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 05:07:43.786118   11792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 05:07:43.788893   11792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 05:07:43.791429   11792 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 05:07:43.791435   11792 kubeadm.go:156] found existing configuration files:
	
	I0701 05:07:43.791456   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/admin.conf
	I0701 05:07:43.793772   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0701 05:07:43.793792   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0701 05:07:43.796834   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/kubelet.conf
	I0701 05:07:43.799269   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0701 05:07:43.799288   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0701 05:07:43.801985   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/controller-manager.conf
	I0701 05:07:43.804794   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0701 05:07:43.804815   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 05:07:43.807723   11792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/scheduler.conf
	I0701 05:07:43.810299   11792 kubeadm.go:162] "https://control-plane.minikube.internal:52167" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52167 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0701 05:07:43.810325   11792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 05:07:43.813277   11792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0701 05:07:43.831531   11792 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0701 05:07:43.831579   11792 kubeadm.go:309] [preflight] Running pre-flight checks
	I0701 05:07:43.879261   11792 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0701 05:07:43.879312   11792 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0701 05:07:43.879352   11792 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0701 05:07:43.928682   11792 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0701 05:07:43.936867   11792 out.go:204]   - Generating certificates and keys ...
	I0701 05:07:43.936901   11792 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0701 05:07:43.936932   11792 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0701 05:07:43.936978   11792 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0701 05:07:43.937032   11792 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0701 05:07:43.937066   11792 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0701 05:07:43.937096   11792 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0701 05:07:43.937135   11792 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0701 05:07:43.937167   11792 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0701 05:07:43.937201   11792 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0701 05:07:43.937238   11792 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0701 05:07:43.937258   11792 kubeadm.go:309] [certs] Using the existing "sa" key
	I0701 05:07:43.937284   11792 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0701 05:07:44.022949   11792 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0701 05:07:44.122248   11792 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0701 05:07:44.252992   11792 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0701 05:07:44.291491   11792 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0701 05:07:44.321619   11792 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0701 05:07:44.321971   11792 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0701 05:07:44.322026   11792 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0701 05:07:44.413970   11792 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0701 05:07:46.792401   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:46.792424   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:44.418269   11792 out.go:204]   - Booting up control plane ...
	I0701 05:07:44.418314   11792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0701 05:07:44.418351   11792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0701 05:07:44.418386   11792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0701 05:07:44.418438   11792 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0701 05:07:44.418540   11792 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0701 05:07:48.918533   11792 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501637 seconds
	I0701 05:07:48.918595   11792 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0701 05:07:48.921984   11792 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0701 05:07:49.441682   11792 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0701 05:07:49.441914   11792 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-803000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0701 05:07:49.946207   11792 kubeadm.go:309] [bootstrap-token] Using token: 6zv076.ks0is4rdrwcaqafy
	I0701 05:07:49.952496   11792 out.go:204]   - Configuring RBAC rules ...
	I0701 05:07:49.952554   11792 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0701 05:07:49.952604   11792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0701 05:07:49.955293   11792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0701 05:07:49.960127   11792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0701 05:07:49.960994   11792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0701 05:07:49.961944   11792 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0701 05:07:49.964989   11792 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0701 05:07:50.135044   11792 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0701 05:07:50.351013   11792 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0701 05:07:50.351962   11792 kubeadm.go:309] 
	I0701 05:07:50.351998   11792 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0701 05:07:50.352002   11792 kubeadm.go:309] 
	I0701 05:07:50.352039   11792 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0701 05:07:50.352043   11792 kubeadm.go:309] 
	I0701 05:07:50.352063   11792 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0701 05:07:50.352109   11792 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0701 05:07:50.352141   11792 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0701 05:07:50.352146   11792 kubeadm.go:309] 
	I0701 05:07:50.352181   11792 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0701 05:07:50.352195   11792 kubeadm.go:309] 
	I0701 05:07:50.352224   11792 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0701 05:07:50.352227   11792 kubeadm.go:309] 
	I0701 05:07:50.352264   11792 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0701 05:07:50.352316   11792 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0701 05:07:50.352355   11792 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0701 05:07:50.352360   11792 kubeadm.go:309] 
	I0701 05:07:50.352413   11792 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0701 05:07:50.352458   11792 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0701 05:07:50.352460   11792 kubeadm.go:309] 
	I0701 05:07:50.352502   11792 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6zv076.ks0is4rdrwcaqafy \
	I0701 05:07:50.352568   11792 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7272984af11ea72dfc66ced44d2e729700629bab3eeb62bb340890f2c50dfe86 \
	I0701 05:07:50.352582   11792 kubeadm.go:309] 	--control-plane 
	I0701 05:07:50.352585   11792 kubeadm.go:309] 
	I0701 05:07:50.352648   11792 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0701 05:07:50.352652   11792 kubeadm.go:309] 
	I0701 05:07:50.352701   11792 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6zv076.ks0is4rdrwcaqafy \
	I0701 05:07:50.352772   11792 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7272984af11ea72dfc66ced44d2e729700629bab3eeb62bb340890f2c50dfe86 
	I0701 05:07:50.352849   11792 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
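The --discovery-token-ca-cert-hash printed in the join commands above follows kubeadm's documented convention: the hex-encoded SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes such a hash from a CA certificate file; the certificate directory /var/lib/minikube/certs comes from the [certs] lines above, while the helper itself is illustrative:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes kubeadm's discovery-token-ca-cert-hash:
// SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}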
	I0701 05:07:50.352865   11792 cni.go:84] Creating CNI manager for ""
	I0701 05:07:50.352873   11792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:07:50.356764   11792 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0701 05:07:50.366763   11792 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0701 05:07:50.369732   11792 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
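The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration the preceding line announces. Its exact contents are not shown in the log; the sketch below renders a plausible general shape for such a conflist (a bridge plugin with host-local IPAM plus portmap), and every field value in it is an assumption, not the actual file:

// Sketch of the general shape of a bridge CNI conflist. Field values
// are guesses for illustration; the real 496-byte file is not in the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				// host-local IPAM hands out pod IPs from a node-local range.
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}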
	I0701 05:07:50.374635   11792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 05:07:50.374673   11792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 05:07:50.374701   11792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-803000 minikube.k8s.io/updated_at=2024_07_01T05_07_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c minikube.k8s.io/name=running-upgrade-803000 minikube.k8s.io/primary=true
	I0701 05:07:50.417983   11792 kubeadm.go:1107] duration metric: took 43.3425ms to wait for elevateKubeSystemPrivileges
	I0701 05:07:50.417988   11792 ops.go:34] apiserver oom_adj: -16
	W0701 05:07:50.418006   11792 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0701 05:07:50.418009   11792 kubeadm.go:393] duration metric: took 4m12.218980959s to StartCluster
	I0701 05:07:50.418018   11792 settings.go:142] acquiring lock: {Name:mk8a5112b51a742a29c931ccf59ae86bde00a5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:50.418186   11792 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:07:50.418552   11792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/kubeconfig: {Name:mk4c90f67cab929310b408048010034fbad6f093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:50.418770   11792 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:07:50.418814   11792 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 05:07:50.418845   11792 config.go:182] Loaded profile config "running-upgrade-803000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:07:50.418847   11792 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-803000"
	I0701 05:07:50.418860   11792 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-803000"
	W0701 05:07:50.418864   11792 addons.go:243] addon storage-provisioner should already be in state true
	I0701 05:07:50.418868   11792 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-803000"
	I0701 05:07:50.418875   11792 host.go:66] Checking if "running-upgrade-803000" exists ...
	I0701 05:07:50.418883   11792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-803000"
	I0701 05:07:50.419752   11792 kapi.go:59] client config for running-upgrade-803000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/running-upgrade-803000/client.key", CAFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1057f9090), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 05:07:50.419882   11792 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-803000"
	W0701 05:07:50.419886   11792 addons.go:243] addon default-storageclass should already be in state true
	I0701 05:07:50.419898   11792 host.go:66] Checking if "running-upgrade-803000" exists ...
	I0701 05:07:50.421791   11792 out.go:177] * Verifying Kubernetes components...
	I0701 05:07:50.422079   11792 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 05:07:50.426230   11792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 05:07:50.426239   11792 sshutil.go:53] new ssh client: &{IP:localhost Port:52135 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/running-upgrade-803000/id_rsa Username:docker}
	I0701 05:07:50.429826   11792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:51.794226   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:51.794269   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:50.433731   11792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:07:50.437839   11792 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 05:07:50.437844   11792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 05:07:50.437849   11792 sshutil.go:53] new ssh client: &{IP:localhost Port:52135 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/running-upgrade-803000/id_rsa Username:docker}
	I0701 05:07:50.523021   11792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 05:07:50.528243   11792 api_server.go:52] waiting for apiserver process to appear ...
	I0701 05:07:50.528282   11792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:07:50.531977   11792 api_server.go:72] duration metric: took 113.197292ms to wait for apiserver process to appear ...
	I0701 05:07:50.531986   11792 api_server.go:88] waiting for apiserver healthz status ...
	I0701 05:07:50.531992   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:50.556902   11792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 05:07:50.575147   11792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 05:07:56.796081   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:56.796171   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:55.534195   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:55.534275   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:01.798625   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:01.798650   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:00.535016   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:00.535068   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:06.800274   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:06.800456   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:06.818301   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:06.818397   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:06.832457   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:06.832526   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:06.844037   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:06.844120   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:06.854940   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:06.854998   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:06.865273   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:06.865347   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:06.875497   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:06.875565   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:06.885225   11947 logs.go:276] 0 containers: []
	W0701 05:08:06.885239   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:06.885295   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:06.903731   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:06.903748   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:06.903755   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:06.930909   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:06.930922   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:06.946517   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:06.946528   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:06.958821   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:06.958831   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:06.975952   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:06.975964   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:06.992692   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:06.992702   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:07.006758   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:07.006769   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:07.018579   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:07.018590   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:07.030013   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:07.030023   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:07.054872   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:07.054881   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:07.090985   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:07.090995   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:07.095108   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:07.095114   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:07.112922   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:07.112932   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:05.535724   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:05.535794   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:07.128897   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:07.128907   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:07.144463   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:07.144474   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:07.254423   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:07.254434   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:09.782963   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:10.536855   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:10.536915   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:14.784924   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:14.785107   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:14.797952   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:14.798025   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:14.808443   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:14.808511   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:14.819155   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:14.819229   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:14.829597   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:14.829663   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:14.840008   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:14.840086   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:14.850734   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:14.850802   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:14.861378   11947 logs.go:276] 0 containers: []
	W0701 05:08:14.861389   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:14.861444   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:14.872183   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:14.872203   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:14.872208   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:14.876804   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:14.876813   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:14.891265   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:14.891275   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:14.905022   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:14.905031   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:14.922284   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:14.922294   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:14.946998   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:14.947006   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:14.958027   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:14.958037   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:14.969790   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:14.969802   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:15.007384   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:15.007394   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:15.032467   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:15.032476   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:15.050534   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:15.050543   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:15.069408   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:15.069421   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:15.084358   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:15.084367   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:15.121296   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:15.121306   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:15.136011   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:15.136025   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:15.147884   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:15.147898   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:15.538418   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:15.538466   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:20.540372   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:20.540401   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0701 05:08:20.938530   11792 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0701 05:08:20.942798   11792 out.go:177] * Enabled addons: storage-provisioner
	I0701 05:08:17.662083   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:20.954753   11792 addons.go:510] duration metric: took 30.536080041s for enable addons: enabled=[storage-provisioner]
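The default-storageclass failure above boils down to a single client-go call, listing StorageClasses through the cluster kubeconfig, which times out against the unresponsive apiserver. A minimal client-go sketch of that call; the kubeconfig path matches the log, but the timeout and error handling are illustrative assumptions, not minikube's addon code:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		fmt.Println(err)
		return
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Bound the request so a dead apiserver surfaces as an i/o timeout,
	// as in the storageclasses error above, instead of hanging forever.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		fmt.Println("listing StorageClasses:", err)
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}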
	I0701 05:08:22.664384   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:22.664797   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:22.702538   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:22.702673   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:22.723065   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:22.723181   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:22.738357   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:22.738436   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:22.752851   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:22.752921   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:22.763364   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:22.763434   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:22.773771   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:22.773850   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:22.784104   11947 logs.go:276] 0 containers: []
	W0701 05:08:22.784115   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:22.784172   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:22.796759   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:22.796777   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:22.796783   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:22.811104   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:22.811114   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:22.822480   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:22.822492   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:22.834795   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:22.834807   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:22.846305   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:22.846317   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:22.850356   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:22.850366   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:22.874763   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:22.874774   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:22.893774   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:22.893784   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:22.904635   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:22.904645   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:22.922872   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:22.922883   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:22.934928   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:22.934940   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:22.972909   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:22.972917   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:23.007556   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:23.007568   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:23.021901   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:23.021913   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:23.046772   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:23.046779   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:23.061350   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:23.061360   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:25.581692   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:25.542323   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:25.542406   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:30.584116   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:30.584201   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:30.595609   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:30.595696   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:30.605990   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:30.606057   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:30.616742   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:30.616811   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:30.627351   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:30.627425   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:30.637111   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:30.637180   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:30.647755   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:30.647820   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:30.658046   11947 logs.go:276] 0 containers: []
	W0701 05:08:30.658058   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:30.658117   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:30.668284   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:30.668302   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:30.668308   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:30.679907   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:30.679920   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:30.697276   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:30.697286   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:30.711902   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:30.711910   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:30.736096   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:30.736106   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:30.753118   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:30.753128   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:30.777645   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:30.777652   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:30.813059   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:30.813074   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:30.827168   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:30.827182   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:30.838767   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:30.838780   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:30.853077   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:30.853090   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:30.864317   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:30.864329   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:30.878011   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:30.878024   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:30.891770   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:30.891783   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:30.904032   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:30.904042   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:30.940659   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:30.940667   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:30.544980   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:30.545011   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:33.446584   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:35.547328   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:35.547410   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:38.448878   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:38.448981   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:38.459885   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:38.459955   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:38.470882   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:38.470953   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:38.481394   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:38.481460   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:38.491879   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:38.491951   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:38.502278   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:38.502354   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:38.513035   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:38.513103   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:38.523501   11947 logs.go:276] 0 containers: []
	W0701 05:08:38.523516   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:38.523572   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:38.533945   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:38.533964   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:38.533969   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:38.552225   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:38.552237   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:38.564004   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:38.564014   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:38.578232   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:38.578244   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:38.583183   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:38.583190   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:38.618596   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:38.618610   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:38.632879   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:38.632890   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:38.657981   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:38.657991   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:38.669724   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:38.669733   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:38.708132   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:38.708146   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:38.719910   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:38.719921   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:38.731550   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:38.731561   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:38.755223   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:38.755230   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:38.766919   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:38.766929   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:38.780953   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:38.780963   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:38.797384   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:38.797398   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:41.317047   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:40.550021   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:40.550044   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:46.319391   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:46.319634   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:46.346856   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:46.346975   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:46.361782   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:46.361858   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:46.373607   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:46.373682   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:46.384407   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:46.384478   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:46.396821   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:46.396886   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:46.407848   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:46.407924   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:46.418050   11947 logs.go:276] 0 containers: []
	W0701 05:08:46.418063   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:46.418130   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:46.428553   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:46.428572   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:46.428578   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:46.432585   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:46.432591   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:46.467547   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:46.467561   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:46.494049   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:46.494060   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:46.507977   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:46.507988   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:46.522364   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:46.522374   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:46.560162   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:46.560174   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:46.578366   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:46.578376   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:46.602563   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:46.602574   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:46.617003   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:46.617014   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:46.641369   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:46.641378   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:46.656789   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:46.656805   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:46.669318   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:46.669328   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:46.680724   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:46.680733   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:46.691959   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:46.691970   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:46.703885   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:46.703897   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:45.550928   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:45.550977   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:49.223295   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:50.553268   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:50.553363   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:50.563887   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:08:50.563955   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:50.574175   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:08:50.574246   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:50.584576   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:08:50.584642   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:50.595156   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:08:50.595232   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:50.605276   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:08:50.605342   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:50.615657   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:08:50.615750   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:50.625957   11792 logs.go:276] 0 containers: []
	W0701 05:08:50.625968   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:50.626023   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:50.638124   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:08:50.638140   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:08:50.638146   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:08:50.650202   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:08:50.650212   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:08:50.667945   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:08:50.667955   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:08:50.679839   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:08:50.679855   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:50.691273   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:50.691284   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:50.727545   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:50.727553   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:50.732226   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:08:50.732233   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:08:50.746406   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:08:50.746416   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:08:50.758181   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:08:50.758191   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:08:50.772680   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:50.772697   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:50.796070   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:50.796079   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:50.834712   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:08:50.834725   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:08:50.848464   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:08:50.848474   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:08:53.361858   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:54.225697   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:54.225925   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:54.248177   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:54.248279   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:54.263963   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:54.264037   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:54.278729   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:54.278805   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:54.289089   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:54.289162   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:54.299197   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:54.299259   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:54.324160   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:54.324234   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:54.336778   11947 logs.go:276] 0 containers: []
	W0701 05:08:54.336788   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:54.336844   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:54.347357   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:54.347375   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:54.347381   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:54.368543   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:54.368553   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:54.380309   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:54.380322   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:54.394556   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:54.394570   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:54.406498   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:54.406512   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:54.418501   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:54.418512   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:54.433738   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:54.433749   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:54.448323   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:54.448333   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:54.462905   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:54.462916   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:54.499235   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:54.499243   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:54.513000   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:54.513012   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:54.537610   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:54.537621   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:54.549978   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:54.549991   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:54.575443   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:54.575454   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:54.587419   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:54.587429   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:54.591659   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:54.591665   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
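Each gathering pass above repeats the same two-step pattern per component: resolve the container ID with a k8s_<name> filter, then dump that container's recent log. A minimal manual sketch reusing the exact commands from the log (etcd shown; any k8s_* component name works; assumes the docker CLI inside the guest):

	# step 1: find the container ID for a component by its k8s_<name> label
	ID=$(docker ps -a --filter=name=k8s_etcd --format={{.ID}} | head -n 1)
	# step 2: dump the last 400 lines of that container's log
	docker logs --tail 400 "$ID"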
	I0701 05:08:58.364203   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:58.364352   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:58.380364   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:08:58.380466   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:58.393334   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:08:58.393407   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:58.404433   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:08:58.404493   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:58.414938   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:08:58.415006   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:58.425491   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:08:58.425554   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:58.436003   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:08:58.436060   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:58.446303   11792 logs.go:276] 0 containers: []
	W0701 05:08:58.446315   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:58.446366   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:58.456576   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:08:58.456592   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:58.456598   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:58.461007   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:08:58.461016   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:08:58.474752   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:08:58.474763   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:08:58.488286   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:08:58.488295   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:08:58.499334   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:08:58.499343   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:08:58.513866   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:58.513878   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:58.537111   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:58.537119   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:57.128718   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:02.130130   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:58.570648   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:08:58.570657   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:08:58.581750   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:08:58.581761   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:08:58.594723   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:08:58.594737   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:08:58.612464   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:08:58.612475   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:08:58.623423   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:08:58.623433   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:58.634745   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:58.634756   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:01.172267   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:02.130345   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:02.156153   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:02.156249   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:02.175573   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:02.175657   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:02.188429   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:02.188510   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:02.199652   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:02.199738   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:02.210851   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:02.210920   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:02.222004   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:02.222076   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:02.232535   11947 logs.go:276] 0 containers: []
	W0701 05:09:02.232551   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:02.232609   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:02.243111   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:02.243133   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:02.243137   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:02.256862   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:02.256872   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:02.272246   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:02.272256   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:02.284525   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:02.284539   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:02.298084   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:02.298097   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:02.333798   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:02.333809   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:02.348845   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:02.348858   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:02.364321   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:02.364334   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:02.382942   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:02.382954   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:02.395261   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:02.395274   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:02.431378   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:02.431386   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:02.444929   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:02.444939   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:02.467754   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:02.467763   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:02.478815   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:02.478827   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
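The dmesg step trims kernel output to actionable records. Reading the flags as shipped (util-linux dmesg): -P disables the pager, -H selects human-readable output, -L=never turns color off, --level keeps only warning-or-worse messages, and tail caps the result at 400 lines. The command verbatim, annotated:

	# no pager (-P), human-readable output (-H), no color (-L=never),
	# warn/err/crit/alert/emerg records only, last 400 lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400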
	I0701 05:09:02.482985   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:02.482992   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:02.508252   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:02.508261   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:05.033003   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:06.174052   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:06.174247   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:06.189775   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:06.189860   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:06.202889   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:06.202969   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:06.215909   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:06.215991   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:06.226410   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:06.226482   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:06.236990   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:06.237056   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:06.247180   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:06.247238   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:06.257462   11792 logs.go:276] 0 containers: []
	W0701 05:09:06.257475   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:06.257525   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:06.268022   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:06.268041   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:06.268046   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:06.304024   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:06.304040   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:06.308905   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:06.308910   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:06.326019   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:06.326030   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:06.337350   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:06.337360   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:06.360100   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:06.360108   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:06.371692   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:06.371702   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:06.383806   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:06.383816   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
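The "describe nodes" step does not rely on a host kubectl: it runs the version-pinned kubectl binary that minikube provisions inside the guest under /var/lib/minikube/binaries/, against the in-guest kubeconfig. Verbatim from the log:

	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig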
	I0701 05:09:06.421203   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:06.421214   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:06.435156   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:06.435169   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:06.448824   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:06.448835   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:06.460002   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:06.460014   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:06.472114   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:06.472126   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:10.033817   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:10.034128   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:10.068466   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:10.068634   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:10.089835   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:10.089927   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:10.103862   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:10.103937   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:10.115843   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:10.115917   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:10.127229   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:10.127299   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:10.139239   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:10.139304   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:10.157195   11947 logs.go:276] 0 containers: []
	W0701 05:09:10.157209   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:10.157267   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:10.167561   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:10.167578   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:10.167583   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:10.182161   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:10.182170   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:10.205909   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:10.205922   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:10.217937   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:10.217948   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:10.229776   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:10.229787   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:10.243539   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:10.243550   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:10.258850   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:10.258866   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:10.272634   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:10.272645   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:10.309688   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:10.309707   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:10.346453   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:10.346469   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:10.361634   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:10.361644   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:10.373190   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:10.373200   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:10.395622   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:10.395630   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:10.406884   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:10.406897   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:10.411396   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:10.411403   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:10.428518   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:10.428532   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:08.989068   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:12.942850   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:13.991553   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:13.991806   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:14.015629   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:14.015732   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:14.032348   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:14.032422   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:14.046292   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:14.046378   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:14.058483   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:14.058550   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:14.068999   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:14.069060   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:14.079119   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:14.079176   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:14.089240   11792 logs.go:276] 0 containers: []
	W0701 05:09:14.089251   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:14.089305   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:14.099779   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:14.099795   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:14.099805   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:14.111364   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:14.111375   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:14.125623   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:14.125633   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:14.137361   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:14.137375   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:14.158175   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:14.158187   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:14.169638   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:14.169650   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:14.193975   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:14.193987   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:14.234067   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:14.234079   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:14.239162   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:14.239169   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:14.253781   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:14.253790   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:14.267465   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:14.267477   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:14.279377   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:14.279388   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:14.290742   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:14.290754   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:16.826539   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:17.945570   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:17.945702   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:17.957726   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:17.957797   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:17.968366   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:17.968451   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:17.978796   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:17.978857   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:17.989750   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:17.989817   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:18.000207   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:18.000264   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:18.010925   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:18.010997   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:18.021264   11947 logs.go:276] 0 containers: []
	W0701 05:09:18.021275   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:18.021334   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:18.031402   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:18.031416   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:18.031421   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:18.048296   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:18.048306   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:18.073640   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:18.073650   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:18.084985   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:18.085001   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:18.099770   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:18.099783   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:18.118539   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:18.118553   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:18.123229   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:18.123235   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:18.140566   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:18.140579   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:18.154049   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:18.154059   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:18.168371   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:18.168385   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:18.201202   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:18.201213   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:18.217794   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:18.217806   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:18.229839   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:18.229854   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:18.266958   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:18.266970   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:18.291247   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:18.291259   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:18.302613   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:18.302627   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
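The kubelet and Docker steps read from the systemd journal rather than from container logs, since neither runs as a container; each pull is capped at the unit's last 400 entries, and the Docker step also covers the cri-docker shim unit. Verbatim from the log:

	# last 400 journal entries per unit
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400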
	I0701 05:09:20.829771   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:21.828835   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:21.829011   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:21.849147   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:21.849238   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:21.862095   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:21.862159   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:21.873256   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:21.873325   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:21.883744   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:21.883811   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:21.893995   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:21.894064   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:21.906000   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:21.906068   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:21.924132   11792 logs.go:276] 0 containers: []
	W0701 05:09:21.924144   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:21.924203   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:21.937831   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:21.937851   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:21.937857   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:21.972829   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:21.972845   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:22.010526   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:22.010536   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:22.025765   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:22.025775   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:22.039828   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:22.039839   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:22.051676   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:22.051688   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:22.066407   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:22.066418   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:22.078009   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:22.078018   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:22.082702   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:22.082710   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:22.094356   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:22.094369   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:22.105979   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:22.105992   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:22.122991   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:22.123003   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:22.146129   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:22.146137   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
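The "container status" step is runtime-agnostic: the backtick substitution resolves crictl's full path when it is installed (falling back to the bare name crictl otherwise), and if that invocation fails entirely, the || clause lists containers through Docker instead. Verbatim from the log:

	# try crictl (full path if found, bare name otherwise); fall back to docker
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a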
	I0701 05:09:25.832096   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:25.832257   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:25.844635   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:25.844709   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:25.862251   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:25.862322   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:25.872348   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:25.872413   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:25.882765   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:25.882836   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:25.893095   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:25.893163   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:25.903807   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:25.903878   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:25.917562   11947 logs.go:276] 0 containers: []
	W0701 05:09:25.917572   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:25.917626   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:25.927986   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:25.928012   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:25.928017   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:25.939298   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:25.939309   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:25.950969   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:25.950979   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:25.967125   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:25.967136   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:25.980827   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:25.980836   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:26.004812   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:26.004823   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:26.016343   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:26.016354   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:26.020441   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:26.020450   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:26.045184   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:26.045194   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:26.058844   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:26.058854   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:26.076319   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:26.076329   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:26.091153   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:26.091163   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:26.109414   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:26.109427   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:26.126117   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:26.126127   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:26.162660   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:26.162668   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:26.198116   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:26.198127   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:24.659547   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:28.711802   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:29.661944   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:29.662122   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:29.680677   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:29.680773   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:29.694859   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:29.694936   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:29.706738   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:29.706806   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:29.717319   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:29.717384   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:29.728028   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:29.728096   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:29.738846   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:29.738913   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:29.750109   11792 logs.go:276] 0 containers: []
	W0701 05:09:29.750121   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:29.750175   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:29.760884   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:29.760903   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:29.760908   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:29.773394   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:29.773406   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:29.807991   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:29.808003   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:29.812433   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:29.812442   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:29.847679   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:29.847693   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:29.862026   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:29.862039   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:29.875702   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:29.875713   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:29.887954   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:29.887968   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:29.899390   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:29.899402   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:29.914939   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:29.914952   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:29.929635   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:29.929646   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:29.949242   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:29.949254   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:29.967211   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:29.967221   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
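	[editor's note] The lines above are one complete diagnostic cycle: after a healthz probe fails, minikube lists the containers for each control-plane component (docker ps -a --filter=name=k8s_<component>), then tails each container's logs plus the host-level sources (kubelet, dmesg, describe nodes, Docker/cri-docker journals, container status). The sketch below reproduces that cycle as a standalone script for anyone re-running the diagnostics by hand. The individual commands are copied verbatim from the ssh_runner lines; the loop scaffolding is an editorial reconstruction, not minikube's actual Go implementation.

	#!/usr/bin/env bash
	# Hedged sketch: approximates one log-gathering cycle from this transcript.
	# Commands are taken from the ssh_runner.go lines above; the loop itself
	# is an assumption for illustration only.
	set -u

	components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner"

	for c in $components; do
	  # List all containers (running or exited) for this component.
	  ids=$(docker ps -a --filter=name=k8s_${c} --format '{{.ID}}')
	  if [ -z "$ids" ]; then
	    # Mirrors the logs.go:278 warning seen for "kindnet" above.
	    echo "No container was found matching \"$c\"" >&2
	    continue
	  fi
	  for id in $ids; do
	    echo "== logs for $c [$id] =="
	    docker logs --tail 400 "$id"   # same tail depth the gatherer uses
	  done
	done

	# Host-level sources gathered in the same cycle (verbatim from the log):
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a

	The remainder of this section is the same cycle repeating, interleaved between the two minikube processes (PIDs 11792 and 11947), each time the apiserver healthz probe times out.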
	I0701 05:09:32.494003   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:33.714095   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:33.714330   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:33.740632   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:33.740759   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:33.758576   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:33.758670   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:33.771959   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:33.772032   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:33.785074   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:33.785150   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:33.796417   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:33.796487   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:33.807374   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:33.807441   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:33.817663   11947 logs.go:276] 0 containers: []
	W0701 05:09:33.817675   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:33.817744   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:33.828774   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:33.828791   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:33.828797   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:33.865779   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:33.865788   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:33.879714   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:33.879724   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:33.895077   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:33.895086   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:33.906759   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:33.906771   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:33.917889   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:33.917900   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:33.929295   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:33.929305   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:33.951770   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:33.951777   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:33.963034   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:33.963044   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:33.977861   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:33.977870   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:33.999357   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:33.999367   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:34.013698   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:34.013709   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:34.018121   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:34.018128   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:34.053676   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:34.053687   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:34.068347   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:34.068357   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:34.094611   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:34.094622   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:36.612317   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:37.496739   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:37.497138   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:37.535196   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:37.535325   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:37.559453   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:37.559545   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:37.574625   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:37.574700   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:37.586725   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:37.586792   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:37.597834   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:37.597917   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:37.609809   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:37.609881   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:37.620737   11792 logs.go:276] 0 containers: []
	W0701 05:09:37.620748   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:37.620807   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:37.632062   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:37.632078   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:37.632083   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:37.647012   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:37.647026   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:37.659356   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:37.659368   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:37.675039   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:37.675049   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:37.687999   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:37.688009   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:37.705903   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:37.705913   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:37.731245   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:37.731253   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:37.735646   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:37.735653   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:37.750950   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:37.750960   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:37.763372   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:37.763382   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:37.775957   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:37.775969   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:37.787978   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:37.787992   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:37.823329   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:37.823341   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:41.614690   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:41.615036   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:41.644617   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:41.644746   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:41.672187   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:41.672261   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:41.684519   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:41.684591   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:41.695625   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:41.695708   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:41.706528   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:41.706588   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:41.717122   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:41.717182   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:41.727439   11947 logs.go:276] 0 containers: []
	W0701 05:09:41.727453   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:41.727507   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:41.738229   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:41.738246   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:41.738251   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:41.763424   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:41.763435   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:41.775556   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:41.775568   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:41.787209   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:41.787220   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:41.823970   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:41.823981   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:41.840806   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:41.840819   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:41.852050   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:41.852062   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:41.865424   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:41.865439   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:41.879252   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:41.879264   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:41.891382   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:41.891392   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:41.908971   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:41.908982   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:41.913087   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:41.913094   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:41.926937   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:41.926947   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:41.942203   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:41.942212   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:41.956104   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:41.956112   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:41.978528   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:41.978537   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:40.362177   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:44.515312   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:45.363693   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:45.363885   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:45.380563   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:45.380644   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:45.393358   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:45.393423   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:45.404584   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:45.404655   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:45.415540   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:45.415609   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:45.426508   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:45.426579   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:45.437750   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:45.437814   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:45.448477   11792 logs.go:276] 0 containers: []
	W0701 05:09:45.448491   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:45.448551   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:45.459734   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:45.459751   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:45.459756   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:45.471785   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:45.471795   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:45.496458   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:45.496465   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:45.508847   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:45.508859   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:45.558964   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:45.558976   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:45.571756   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:45.571769   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:45.583802   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:45.583813   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:45.599076   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:45.599089   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:45.617137   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:45.617147   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:45.654063   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:45.654081   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:45.658910   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:45.658920   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:45.673850   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:45.673862   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:45.688006   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:45.688016   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:48.201183   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:49.517732   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:49.518105   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:49.559095   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:49.559229   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:49.580894   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:49.580978   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:49.595787   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:49.595865   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:49.609196   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:49.609266   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:49.620591   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:49.620656   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:49.631496   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:49.631568   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:49.642020   11947 logs.go:276] 0 containers: []
	W0701 05:09:49.642035   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:49.642100   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:49.654937   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:49.654953   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:49.654959   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:49.667299   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:49.667310   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:49.679309   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:49.679318   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:49.691099   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:49.691112   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:49.706438   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:49.706449   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:49.721244   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:49.721253   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:49.735036   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:49.735047   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:49.752689   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:49.752699   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:49.766405   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:49.766415   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:49.790982   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:49.790992   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:49.802779   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:49.802791   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:49.806931   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:49.806938   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:49.841340   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:49.841351   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:49.853184   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:49.853197   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:49.868513   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:49.868523   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:49.907205   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:49.907218   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:53.201579   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:53.201752   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:53.224887   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:09:53.224969   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:53.236740   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:09:53.236809   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:53.247895   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:09:53.247969   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:53.259275   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:09:53.259339   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:53.270252   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:09:53.270321   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:53.281930   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:09:53.282001   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:53.292771   11792 logs.go:276] 0 containers: []
	W0701 05:09:53.292780   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:53.292834   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:53.304135   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:09:53.304149   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:53.304155   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:53.341224   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:09:53.341234   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:09:53.353545   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:09:53.353555   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:09:53.369591   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:53.369601   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:53.393037   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:53.393046   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:53.398004   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:09:53.398010   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:09:53.412749   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:09:53.412759   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:09:53.427148   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:09:53.427158   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:09:53.439593   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:09:53.439603   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:09:53.455143   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:09:53.455157   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:09:53.473032   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:09:53.473042   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:09:53.485641   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:09:53.485652   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:53.497435   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:53.497444   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:52.436907   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:56.032180   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:57.439347   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:57.439674   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:57.475083   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:57.475230   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:57.497335   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:57.497430   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:57.517997   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:57.518071   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:57.529562   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:57.529636   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:57.541970   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:57.542033   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:57.553461   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:57.553530   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:57.563959   11947 logs.go:276] 0 containers: []
	W0701 05:09:57.563975   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:57.564034   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:57.574525   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:57.574542   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:57.574548   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:57.579341   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:57.579348   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:57.604304   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:57.604314   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:57.628441   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:57.628449   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:57.639723   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:57.639732   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:57.677245   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:57.677259   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:57.691259   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:57.691269   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:57.706468   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:57.706480   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:57.718732   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:57.718743   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:57.732503   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:57.732515   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:57.743965   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:57.743974   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:57.778767   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:57.778778   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:57.794891   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:57.794901   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:57.809787   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:57.809801   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:57.821327   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:57.821337   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:57.833728   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:57.833737   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:00.351831   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:01.033546   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:01.033747   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:01.046927   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:01.047002   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:01.058053   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:01.058126   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:01.068455   11792 logs.go:276] 2 containers: [83d49c28d07e 23db66bd25e4]
	I0701 05:10:01.068522   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:01.080385   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:01.080453   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:01.094978   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:01.095049   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:01.106188   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:01.106255   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:01.117131   11792 logs.go:276] 0 containers: []
	W0701 05:10:01.117143   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:01.117203   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:01.128012   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:01.128028   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:01.128033   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:01.139992   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:01.140001   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:01.177869   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:01.177882   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:01.191791   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:01.191804   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:01.204158   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:01.204168   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:01.215852   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:01.215863   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:01.231544   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:01.231557   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:01.253111   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:01.253120   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:01.286300   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:01.286309   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:01.290461   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:01.290469   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:01.305188   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:01.305198   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:01.317263   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:01.317273   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:01.342239   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:01.342248   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:05.354145   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:05.354345   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:05.370718   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:05.370806   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:05.384884   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:05.384959   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:05.396344   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:05.396408   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:05.407297   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:05.407363   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:05.418007   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:05.418069   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:05.428594   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:05.428663   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:05.438646   11947 logs.go:276] 0 containers: []
	W0701 05:10:05.438656   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:05.438709   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:05.449304   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:05.449319   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:05.449324   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:05.472877   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:05.472883   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:05.484953   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:05.484964   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:05.519816   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:05.519827   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:05.533929   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:05.533939   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:05.548916   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:05.548927   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:05.563668   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:05.563680   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:05.568015   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:05.568023   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:05.582682   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:05.582693   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:05.594954   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:05.594965   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:05.610657   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:05.610667   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:05.623071   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:05.623082   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:05.641812   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:05.641823   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:05.680226   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:05.680243   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:05.705855   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:05.705867   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:05.718042   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:05.718052   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:03.855358   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:08.234195   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:08.857577   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:08.857844   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:08.893077   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:08.893173   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:08.911412   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:08.911494   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:08.924412   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:08.924489   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:08.935318   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:08.935385   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:08.946504   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:08.946573   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:08.957535   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:08.957601   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:08.968410   11792 logs.go:276] 0 containers: []
	W0701 05:10:08.968420   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:08.968470   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:08.979026   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:08.979045   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:08.979051   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:08.993124   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:08.993136   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:09.004534   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:09.004546   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:09.038302   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:09.038311   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:09.042753   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:09.042760   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:09.064425   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:09.064434   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:09.079732   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:09.079746   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:09.091502   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:09.091516   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:09.116052   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:09.116059   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:09.151159   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:09.151169   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:09.163451   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:09.163465   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:09.181142   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:09.181151   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:09.192191   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:09.192201   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:09.203343   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:09.203354   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:09.214529   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:09.214543   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:11.727964   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:13.235051   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:13.235245   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:13.258877   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:13.258969   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:13.273825   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:13.273897   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:13.286314   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:13.286378   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:13.298004   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:13.298074   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:13.309487   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:13.309553   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:13.324108   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:13.324170   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:13.334548   11947 logs.go:276] 0 containers: []
	W0701 05:10:13.334563   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:13.334622   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:13.344858   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:13.344876   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:13.344882   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:13.356244   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:13.356256   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:13.373586   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:13.373596   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:13.397599   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:13.397608   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:13.434897   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:13.434905   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:13.449166   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:13.449177   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:13.463209   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:13.463219   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:13.467836   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:13.467844   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:13.483720   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:13.483730   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:13.495996   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:13.496008   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:13.511733   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:13.511747   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:13.526046   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:13.526056   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:13.537134   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:13.537143   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:13.548819   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:13.548830   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:13.584290   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:13.584302   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:13.609865   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:13.609878   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:16.121980   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:16.730319   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:16.730470   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:16.745107   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:16.745187   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:16.756388   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:16.756452   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:16.766982   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:16.767053   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:16.783820   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:16.783887   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:16.794657   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:16.794722   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:16.805406   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:16.805470   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:16.816045   11792 logs.go:276] 0 containers: []
	W0701 05:10:16.816059   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:16.816111   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:16.826124   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:16.826141   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:16.826146   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:16.846538   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:16.846560   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:16.873290   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:16.873305   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:16.908460   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:16.908471   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:16.923097   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:16.923106   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:16.934758   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:16.934767   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:16.946357   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:16.946366   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:16.957467   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:16.957477   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:16.971552   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:16.971561   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:16.983173   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:16.983182   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:17.016174   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:17.016184   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:17.051019   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:17.051043   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:17.056794   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:17.056805   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:17.068364   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:17.068375   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:17.080205   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:17.080217   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
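Each gathering pass above follows a fixed fan-out: enumerate a component's containers by the k8s_ name prefix that cri-dockerd gives pod containers, then tail each one. Condensed into a loop, using the exact commands and the 400-line tail recorded in the Run: lines (component list taken from the log):

	# Mirror the docker ps / docker logs pairs recorded above.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager storage-provisioner; do
	  for id in $(docker ps -a --filter=name=k8s_$c --format='{{.ID}}'); do
	    echo "== $c [$id] =="
	    docker logs --tail 400 "$id"
	  done
	done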
	I0701 05:10:21.124207   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:21.124395   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:21.139125   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:21.139202   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:21.150380   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:21.150450   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:21.161200   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:21.161267   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:21.171220   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:21.171291   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:21.181663   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:21.181730   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:21.192319   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:21.192385   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:21.204979   11947 logs.go:276] 0 containers: []
	W0701 05:10:21.204989   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:21.205043   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:21.215741   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:21.215760   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:21.215765   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:21.219965   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:21.219973   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:21.240502   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:21.240513   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:21.253859   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:21.253874   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:21.277508   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:21.277518   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:21.315004   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:21.315013   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:21.350768   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:21.350779   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:21.365293   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:21.365303   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:21.380233   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:21.380245   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:21.400876   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:21.400885   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:21.413726   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:21.413735   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:21.442710   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:21.442719   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:21.454635   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:21.454647   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:21.466296   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:21.466307   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:21.477806   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:21.477820   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:21.492144   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:21.492155   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:19.594446   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:24.007882   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:24.596834   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:24.596965   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:24.609757   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:24.609834   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:24.622832   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:24.622902   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:24.633573   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:24.633641   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:24.644379   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:24.644444   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:24.654751   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:24.654822   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:24.674775   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:24.674846   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:24.685595   11792 logs.go:276] 0 containers: []
	W0701 05:10:24.685613   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:24.685665   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:24.696327   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:24.696345   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:24.696353   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:24.708065   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:24.708075   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:24.719965   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:24.719975   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:24.735115   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:24.735127   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:24.746736   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:24.746746   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:24.771306   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:24.771315   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:24.782962   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:24.782975   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:24.818643   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:24.818654   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:24.830205   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:24.830216   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:24.842087   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:24.842096   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:24.856053   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:24.856063   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:24.867862   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:24.867872   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:24.884810   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:24.884819   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:24.902120   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:24.902130   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:24.935414   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:24.935422   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:27.441982   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:29.009216   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:29.009342   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:29.021822   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:29.021891   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:29.033021   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:29.033093   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:29.046313   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:29.046386   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:29.057170   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:29.057236   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:29.067820   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:29.067887   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:29.078398   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:29.078462   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:29.088667   11947 logs.go:276] 0 containers: []
	W0701 05:10:29.088679   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:29.088737   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:29.099875   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:29.099894   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:29.099899   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:29.138042   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:29.138059   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:29.153062   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:29.153073   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:29.164946   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:29.164956   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:29.186221   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:29.186232   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:29.199993   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:29.200002   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:29.211362   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:29.211372   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:29.215619   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:29.215626   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:29.238342   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:29.238351   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:29.252175   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:29.252185   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:29.276335   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:29.276350   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:29.287575   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:29.287587   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:29.311558   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:29.311568   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:29.345750   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:29.345761   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:29.361679   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:29.361695   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:29.373098   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:29.373108   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
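The recurring "container status" command uses a small fallback chain: run crictl if `which` finds it (otherwise the literal word crictl, which then fails), and fall back to docker ps when that whole attempt errors out. Unpacked into an if/else for readability (roughly equivalent; the original one-liner also falls back when crictl exists but exits non-zero):

	# Equivalent of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	if command -v crictl >/dev/null 2>&1; then
	  sudo crictl ps -a      # CRI-level view of all containers
	else
	  sudo docker ps -a      # fall back to the Docker engine directly
	fi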
	I0701 05:10:31.889083   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:32.444348   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:32.444579   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:32.466845   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:32.466946   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:32.482688   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:32.482773   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:32.495849   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:32.495923   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:32.508690   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:32.508748   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:32.519025   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:32.519097   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:32.529505   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:32.529570   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:32.539869   11792 logs.go:276] 0 containers: []
	W0701 05:10:32.539881   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:32.539936   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:32.552257   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:32.552275   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:32.552280   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:32.566703   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:32.566713   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:32.578118   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:32.578128   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:32.589420   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:32.589430   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:32.603801   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:32.603814   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:32.616070   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:32.616080   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:32.620909   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:32.620917   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:32.635320   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:32.635331   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:32.652653   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:32.652663   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:32.665079   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:32.665091   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:32.705393   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:32.705404   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:32.718582   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:32.718591   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:32.730487   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:32.730497   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:32.742082   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:32.742091   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:32.766749   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:32.766758   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:36.891473   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:36.891677   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:36.911913   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:36.912008   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:36.927432   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:36.927512   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:36.939276   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:36.939345   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:36.950173   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:36.950246   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:36.960606   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:36.960673   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:36.974549   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:36.974616   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:36.985293   11947 logs.go:276] 0 containers: []
	W0701 05:10:36.985304   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:36.985365   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:36.995477   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:36.995493   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:36.995499   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:36.999776   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:36.999783   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:37.013711   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:37.013721   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:37.028558   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:37.028569   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:37.051317   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:37.051328   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:37.062693   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:37.062704   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:37.097408   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:37.097418   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:37.111505   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:37.111519   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:35.302561   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:37.135857   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:37.135868   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:37.156818   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:37.156830   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:37.174290   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:37.174300   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:37.213565   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:37.213577   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:37.228468   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:37.228482   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:37.242118   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:37.242128   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:37.257428   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:37.257438   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:37.268383   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:37.268393   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:39.780534   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:40.305122   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:40.305340   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:40.323301   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:40.323385   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:40.336504   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:40.336571   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:40.348003   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:40.348076   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:40.358188   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:40.358260   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:40.368740   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:40.368808   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:40.379197   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:40.379263   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:40.388968   11792 logs.go:276] 0 containers: []
	W0701 05:10:40.388985   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:40.389036   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:40.403611   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:40.403627   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:40.403633   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:40.437663   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:40.437677   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:40.455096   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:40.455107   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:40.466888   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:40.466900   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:40.478218   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:40.478229   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:40.503495   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:40.503505   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:40.508162   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:40.508173   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:40.520188   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:40.520201   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:40.531687   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:40.531697   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:40.543595   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:40.543605   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:40.556476   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:40.556489   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:40.568113   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:40.568123   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:40.604056   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:40.604066   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:40.617891   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:40.617900   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:40.633277   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:40.633285   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:43.153493   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:44.782553   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:44.782711   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:44.799181   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:44.799259   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:44.810281   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:44.810341   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:44.820727   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:44.820799   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:44.830800   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:44.830870   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:44.841342   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:44.841413   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:44.851398   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:44.851463   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:44.862040   11947 logs.go:276] 0 containers: []
	W0701 05:10:44.862051   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:44.862103   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:44.872275   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:44.872292   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:44.872298   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:44.888532   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:44.888545   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:44.902022   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:44.902034   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:44.913836   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:44.913850   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:44.918275   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:44.918283   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:44.952895   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:44.952904   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:44.965144   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:44.965159   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:44.986758   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:44.986766   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:45.022377   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:45.022387   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:45.039603   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:45.039616   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:45.051307   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:45.051318   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:45.065922   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:45.065934   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:45.077415   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:45.077428   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:45.096450   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:45.096460   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:45.121942   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:45.121954   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:45.136596   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:45.136606   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:48.155898   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:48.156243   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:48.192340   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:48.192469   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:48.217618   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:48.217701   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:48.244300   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:48.244371   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:48.255525   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:48.255596   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:48.271022   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:48.271089   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:48.282268   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:48.282344   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:48.295893   11792 logs.go:276] 0 containers: []
	W0701 05:10:48.295905   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:48.295966   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:48.307196   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:48.307214   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:48.307219   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:48.343269   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:48.343278   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:48.357975   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:48.357984   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:48.369536   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:48.369549   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:48.383823   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:48.383834   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:48.414314   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:48.414325   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:48.425676   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:48.425686   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:48.438987   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:48.438998   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:48.475692   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:48.475702   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:48.492817   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:48.492827   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:48.514379   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:48.514391   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:10:48.527833   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:48.527843   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:48.532857   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:48.532865   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:48.552284   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:48.552294   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:47.653074   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:48.564504   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:48.564514   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:51.089810   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:52.655490   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:52.655816   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:52.696076   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:52.696206   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:52.714729   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:52.714827   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:52.732074   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:52.732165   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:52.744744   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:52.744812   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:52.755348   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:52.755417   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:52.765815   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:52.765880   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:52.776215   11947 logs.go:276] 0 containers: []
	W0701 05:10:52.776226   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:52.776276   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:52.787392   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:52.787411   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:52.787417   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:52.806775   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:52.806791   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:52.821130   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:52.821143   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:52.846087   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:52.846098   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:52.860779   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:52.860788   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:52.874954   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:52.874965   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:52.887526   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:52.887538   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:52.921095   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:52.921109   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:52.937010   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:52.937021   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:52.951841   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:52.951851   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:52.990739   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:52.990747   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:52.995111   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:52.995118   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:53.010289   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:53.010300   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:53.022360   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:53.022372   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:53.037568   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:53.037580   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:53.049873   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:53.049885   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:55.575684   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:56.092370   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:56.092594   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:56.111299   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:10:56.111400   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:56.125508   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:10:56.125578   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:56.140652   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:10:56.140718   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:56.151292   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:10:56.151354   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:56.161908   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:10:56.161975   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:56.172772   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:10:56.172836   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:56.186819   11792 logs.go:276] 0 containers: []
	W0701 05:10:56.186829   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:56.186877   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:56.198088   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:10:56.198106   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:10:56.198111   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:10:56.214605   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:56.214615   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:56.219536   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:10:56.219545   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:10:56.234036   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:10:56.234045   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:10:56.245640   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:10:56.245651   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:10:56.261978   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:56.261987   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:56.286762   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:10:56.286769   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:56.298471   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:56.298481   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:56.333760   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:10:56.333774   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:10:56.345468   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:10:56.345479   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:10:56.357631   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:56.357647   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:56.392868   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:10:56.392875   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:10:56.405856   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:10:56.405870   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:10:56.423500   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:10:56.423508   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:10:56.441025   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:10:56.441036   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:00.577861   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:00.578008   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:00.594488   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:11:00.594574   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:00.609258   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:11:00.609330   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:00.620026   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:11:00.620089   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:00.630938   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:11:00.631011   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:00.644847   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:11:00.644915   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:00.654921   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:11:00.654986   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:00.664738   11947 logs.go:276] 0 containers: []
	W0701 05:11:00.664748   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:00.664798   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:00.676978   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:11:00.676996   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:11:00.677003   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:11:00.693783   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:00.693793   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:00.717670   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:11:00.717680   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:00.729894   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:00.729904   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:00.734689   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:11:00.734696   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:11:00.746685   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:11:00.746696   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:11:00.772607   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:11:00.772617   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:11:00.784408   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:00.784418   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:00.821466   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:11:00.821477   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:11:00.836137   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:11:00.836147   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:11:00.850119   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:11:00.850129   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:11:00.864886   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:11:00.864896   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:11:00.878396   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:00.878406   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:00.916825   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:11:00.916835   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:11:00.943459   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:11:00.943473   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:11:00.954760   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:11:00.954774   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:58.954740   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:03.470205   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:03.957083   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:03.957265   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:03.974225   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:03.974314   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:03.995487   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:03.995547   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:04.005619   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:04.005695   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:04.019713   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:04.019774   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:04.031096   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:04.031167   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:04.041485   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:04.041545   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:04.051574   11792 logs.go:276] 0 containers: []
	W0701 05:11:04.051587   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:04.051646   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:04.062130   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:04.062151   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:04.062156   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:04.105762   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:04.105773   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:04.118765   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:04.118774   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:04.132880   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:04.132891   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:04.144985   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:04.144996   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:04.160242   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:04.160251   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:04.177826   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:04.177836   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:04.197951   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:04.197960   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:04.223409   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:04.223419   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:04.259099   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:04.259107   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:04.275001   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:04.275011   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:04.280112   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:04.280121   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:04.294105   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:04.294118   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:04.309261   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:04.309275   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:04.321214   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:04.321227   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:06.840620   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:08.472484   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:08.472559   11947 kubeadm.go:591] duration metric: took 4m3.564007333s to restartPrimaryControlPlane
	W0701 05:11:08.472620   11947 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0701 05:11:08.472643   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0701 05:11:09.509165   11947 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.036515375s)
	I0701 05:11:09.509240   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 05:11:09.514197   11947 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 05:11:09.516966   11947 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 05:11:09.519502   11947 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 05:11:09.519508   11947 kubeadm.go:156] found existing configuration files:
	
	I0701 05:11:09.519533   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf
	I0701 05:11:09.521906   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0701 05:11:09.521927   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0701 05:11:09.524582   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf
	I0701 05:11:09.527146   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0701 05:11:09.527175   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0701 05:11:09.530782   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf
	I0701 05:11:09.533697   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0701 05:11:09.533814   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 05:11:09.536822   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf
	I0701 05:11:09.539417   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0701 05:11:09.539439   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 05:11:09.542158   11947 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0701 05:11:09.559565   11947 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0701 05:11:09.559703   11947 kubeadm.go:309] [preflight] Running pre-flight checks
	I0701 05:11:09.610518   11947 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0701 05:11:09.610572   11947 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0701 05:11:09.610615   11947 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0701 05:11:09.661299   11947 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0701 05:11:09.665540   11947 out.go:204]   - Generating certificates and keys ...
	I0701 05:11:09.665576   11947 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0701 05:11:09.665610   11947 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0701 05:11:09.665647   11947 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0701 05:11:09.665685   11947 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0701 05:11:09.665718   11947 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0701 05:11:09.665756   11947 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0701 05:11:09.665788   11947 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0701 05:11:09.665825   11947 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0701 05:11:09.665864   11947 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0701 05:11:09.665901   11947 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0701 05:11:09.665918   11947 kubeadm.go:309] [certs] Using the existing "sa" key
	I0701 05:11:09.665949   11947 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0701 05:11:09.694473   11947 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0701 05:11:09.848613   11947 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0701 05:11:10.178440   11947 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0701 05:11:10.416487   11947 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0701 05:11:10.446926   11947 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0701 05:11:10.448401   11947 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0701 05:11:10.448424   11947 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0701 05:11:10.546518   11947 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0701 05:11:10.553435   11947 out.go:204]   - Booting up control plane ...
	I0701 05:11:10.553558   11947 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0701 05:11:10.553606   11947 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0701 05:11:10.553643   11947 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0701 05:11:10.553763   11947 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0701 05:11:10.553848   11947 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0701 05:11:11.842880   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:11.843061   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:11.854995   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:11.855070   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:11.866492   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:11.866565   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:11.877794   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:11.877867   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:11.889049   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:11.889112   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:11.903752   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:11.903822   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:11.915568   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:11.915637   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:11.926534   11792 logs.go:276] 0 containers: []
	W0701 05:11:11.926547   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:11.926609   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:11.937853   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:11.937870   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:11.937875   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:11.975182   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:11.975197   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:11.989925   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:11.989935   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:12.001926   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:12.001937   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:12.014497   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:12.014507   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:12.032614   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:12.032625   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:12.044478   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:12.044490   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:12.057116   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:12.057127   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:12.071927   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:12.071936   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:12.099097   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:12.099107   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:12.111073   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:12.111085   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:12.115903   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:12.115912   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:12.151007   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:12.151017   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:12.163737   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:12.163747   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:12.177999   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:12.178009   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:15.050542   11947 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504518 seconds
	I0701 05:11:15.050600   11947 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0701 05:11:15.054073   11947 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0701 05:11:15.570379   11947 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0701 05:11:15.570482   11947 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-841000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0701 05:11:16.075022   11947 kubeadm.go:309] [bootstrap-token] Using token: vt4d8y.l8stakfyrhjy34q0
	I0701 05:11:16.079249   11947 out.go:204]   - Configuring RBAC rules ...
	I0701 05:11:16.079309   11947 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0701 05:11:16.079365   11947 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0701 05:11:16.085680   11947 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0701 05:11:16.086725   11947 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0701 05:11:16.087614   11947 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0701 05:11:16.088452   11947 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0701 05:11:16.091737   11947 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0701 05:11:16.250837   11947 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0701 05:11:16.478676   11947 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0701 05:11:16.479337   11947 kubeadm.go:309] 
	I0701 05:11:16.479369   11947 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0701 05:11:16.479371   11947 kubeadm.go:309] 
	I0701 05:11:16.479420   11947 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0701 05:11:16.479424   11947 kubeadm.go:309] 
	I0701 05:11:16.479436   11947 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0701 05:11:16.479466   11947 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0701 05:11:16.479502   11947 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0701 05:11:16.479525   11947 kubeadm.go:309] 
	I0701 05:11:16.479638   11947 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0701 05:11:16.479647   11947 kubeadm.go:309] 
	I0701 05:11:16.479708   11947 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0701 05:11:16.479720   11947 kubeadm.go:309] 
	I0701 05:11:16.479762   11947 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0701 05:11:16.479814   11947 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0701 05:11:16.479863   11947 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0701 05:11:16.479869   11947 kubeadm.go:309] 
	I0701 05:11:16.479907   11947 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0701 05:11:16.480010   11947 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0701 05:11:16.480043   11947 kubeadm.go:309] 
	I0701 05:11:16.480149   11947 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vt4d8y.l8stakfyrhjy34q0 \
	I0701 05:11:16.480238   11947 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7272984af11ea72dfc66ced44d2e729700629bab3eeb62bb340890f2c50dfe86 \
	I0701 05:11:16.480252   11947 kubeadm.go:309] 	--control-plane 
	I0701 05:11:16.480256   11947 kubeadm.go:309] 
	I0701 05:11:16.480301   11947 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0701 05:11:16.480305   11947 kubeadm.go:309] 
	I0701 05:11:16.480400   11947 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vt4d8y.l8stakfyrhjy34q0 \
	I0701 05:11:16.480468   11947 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7272984af11ea72dfc66ced44d2e729700629bab3eeb62bb340890f2c50dfe86 
	I0701 05:11:16.480536   11947 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0701 05:11:16.480548   11947 cni.go:84] Creating CNI manager for ""
	I0701 05:11:16.480555   11947 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:11:16.484333   11947 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0701 05:11:16.491274   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0701 05:11:16.494246   11947 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0701 05:11:16.499178   11947 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 05:11:16.499220   11947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 05:11:16.499223   11947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-841000 minikube.k8s.io/updated_at=2024_07_01T05_11_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c minikube.k8s.io/name=stopped-upgrade-841000 minikube.k8s.io/primary=true
	I0701 05:11:16.537881   11947 ops.go:34] apiserver oom_adj: -16
	I0701 05:11:16.537958   11947 kubeadm.go:1107] duration metric: took 38.775417ms to wait for elevateKubeSystemPrivileges
	W0701 05:11:16.537974   11947 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0701 05:11:16.537979   11947 kubeadm.go:393] duration metric: took 4m11.6424805s to StartCluster
	I0701 05:11:16.537988   11947 settings.go:142] acquiring lock: {Name:mk8a5112b51a742a29c931ccf59ae86bde00a5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:11:16.538077   11947 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:11:16.538486   11947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/kubeconfig: {Name:mk4c90f67cab929310b408048010034fbad6f093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:11:16.538712   11947 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:11:16.538770   11947 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 05:11:16.538801   11947 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-841000"
	I0701 05:11:16.538813   11947 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-841000"
	W0701 05:11:16.538817   11947 addons.go:243] addon storage-provisioner should already be in state true
	I0701 05:11:16.538820   11947 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:11:16.538822   11947 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-841000"
	I0701 05:11:16.538854   11947 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-841000"
	I0701 05:11:16.538828   11947 host.go:66] Checking if "stopped-upgrade-841000" exists ...
	I0701 05:11:16.539296   11947 retry.go:31] will retry after 1.032144928s: connect: dial unix /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/monitor: connect: connection refused
	I0701 05:11:16.539979   11947 kapi.go:59] client config for stopped-upgrade-841000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.key", CAFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105d4d090), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 05:11:16.547496   11947 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-841000"
	W0701 05:11:16.547503   11947 addons.go:243] addon default-storageclass should already be in state true
	I0701 05:11:16.547514   11947 host.go:66] Checking if "stopped-upgrade-841000" exists ...
	I0701 05:11:16.547564   11947 out.go:177] * Verifying Kubernetes components...
	I0701 05:11:16.548562   11947 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 05:11:16.548571   11947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 05:11:16.548587   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0701 05:11:16.551299   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:11:16.628436   11947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 05:11:16.633771   11947 api_server.go:52] waiting for apiserver process to appear ...
	I0701 05:11:16.633813   11947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:11:16.637658   11947 api_server.go:72] duration metric: took 98.937125ms to wait for apiserver process to appear ...
	I0701 05:11:16.637665   11947 api_server.go:88] waiting for apiserver healthz status ...
	I0701 05:11:16.637672   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:16.659408   11947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 05:11:17.577567   11947 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:11:14.692421   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:17.581597   11947 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 05:11:17.581603   11947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 05:11:17.581611   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0701 05:11:17.611451   11947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 05:11:21.650604   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:21.650647   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:19.700949   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:19.701216   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:19.723986   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:19.724110   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:19.739635   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:19.739716   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:19.753026   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:19.753091   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:19.764052   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:19.764113   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:19.774445   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:19.774504   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:19.785138   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:19.785196   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:19.795370   11792 logs.go:276] 0 containers: []
	W0701 05:11:19.795383   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:19.795438   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:19.806270   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:19.806286   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:19.806292   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:19.820591   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:19.820604   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:19.832237   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:19.832248   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:19.844825   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:19.844835   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:19.856244   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:19.856255   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:19.868382   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:19.868392   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:19.882302   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:19.882311   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:19.893860   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:19.893870   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:19.905934   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:19.905945   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:19.920659   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:19.920668   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:19.945816   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:19.945823   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:19.981239   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:19.981248   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:19.986197   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:19.986207   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:20.022066   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:20.022077   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:20.033648   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:20.033658   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:22.557324   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:26.660565   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:26.660621   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:27.568000   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:27.568087   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:27.578954   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:27.579025   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:27.589447   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:27.589514   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:27.600299   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:27.600373   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:27.611269   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:27.611339   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:27.621936   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:27.622013   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:27.632559   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:27.632628   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:27.644427   11792 logs.go:276] 0 containers: []
	W0701 05:11:27.644438   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:27.644491   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:27.655427   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:27.655445   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:27.655450   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:27.679148   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:27.679156   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:27.693890   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:27.693900   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:27.706419   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:27.706430   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:27.718291   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:27.718301   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:27.730470   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:27.730482   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:27.748970   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:27.748980   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:27.784754   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:27.784767   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:27.821142   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:27.821155   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:27.825885   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:27.825891   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:27.837626   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:27.837638   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:27.849292   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:27.849302   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:27.861505   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:27.861516   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:27.876043   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:27.876053   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:27.890575   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:27.890585   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:31.667956   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:31.668003   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:30.408014   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:36.673560   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:36.673603   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:35.415785   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:35.416011   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:35.447268   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:35.447371   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:35.463992   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:35.464071   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:35.479167   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:35.479247   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:35.490720   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:35.490790   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:35.500772   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:35.500844   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:35.514557   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:35.514627   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:35.524716   11792 logs.go:276] 0 containers: []
	W0701 05:11:35.524728   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:35.524785   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:35.535071   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:35.535090   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:35.535095   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:35.546282   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:35.546295   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:35.558360   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:35.558373   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:35.593161   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:35.593174   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:35.607796   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:35.607806   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:35.619566   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:35.619581   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:35.634344   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:35.634355   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:35.646007   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:35.646022   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:35.663660   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:35.663670   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:35.676536   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:35.676549   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:35.680999   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:35.681008   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:35.719402   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:35.719415   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:35.733191   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:35.733201   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:35.745096   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:35.745105   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:35.756883   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:35.756893   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:38.283211   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:41.677996   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:41.678037   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:43.288831   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:43.288954   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:43.300402   11792 logs.go:276] 1 containers: [1f556802cdce]
	I0701 05:11:43.300479   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:43.312449   11792 logs.go:276] 1 containers: [0c7f28971fad]
	I0701 05:11:43.312518   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:43.323576   11792 logs.go:276] 4 containers: [10c4852b4d2c 408434b9d5ff 83d49c28d07e 23db66bd25e4]
	I0701 05:11:43.323641   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:43.334250   11792 logs.go:276] 1 containers: [23201d8f9190]
	I0701 05:11:43.334315   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:43.344997   11792 logs.go:276] 1 containers: [b727f7da91a4]
	I0701 05:11:43.345064   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:43.363245   11792 logs.go:276] 1 containers: [47b1ac0ad61a]
	I0701 05:11:43.363308   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:43.373427   11792 logs.go:276] 0 containers: []
	W0701 05:11:43.373440   11792 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:43.373490   11792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:43.387510   11792 logs.go:276] 1 containers: [734c659ea499]
	I0701 05:11:43.387525   11792 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:43.387531   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:43.422696   11792 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:43.422703   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:43.426915   11792 logs.go:123] Gathering logs for etcd [0c7f28971fad] ...
	I0701 05:11:43.426920   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c7f28971fad"
	I0701 05:11:43.440524   11792 logs.go:123] Gathering logs for coredns [408434b9d5ff] ...
	I0701 05:11:43.440535   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 408434b9d5ff"
	I0701 05:11:43.451705   11792 logs.go:123] Gathering logs for coredns [23db66bd25e4] ...
	I0701 05:11:43.451716   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23db66bd25e4"
	I0701 05:11:43.463331   11792 logs.go:123] Gathering logs for kube-apiserver [1f556802cdce] ...
	I0701 05:11:43.463342   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f556802cdce"
	I0701 05:11:43.478608   11792 logs.go:123] Gathering logs for coredns [83d49c28d07e] ...
	I0701 05:11:43.478619   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83d49c28d07e"
	I0701 05:11:43.490146   11792 logs.go:123] Gathering logs for storage-provisioner [734c659ea499] ...
	I0701 05:11:43.490161   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734c659ea499"
	I0701 05:11:43.501560   11792 logs.go:123] Gathering logs for container status ...
	I0701 05:11:43.501569   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:43.513186   11792 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:43.513195   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:43.550158   11792 logs.go:123] Gathering logs for coredns [10c4852b4d2c] ...
	I0701 05:11:43.550168   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c4852b4d2c"
	I0701 05:11:43.562285   11792 logs.go:123] Gathering logs for kube-scheduler [23201d8f9190] ...
	I0701 05:11:43.562299   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23201d8f9190"
	I0701 05:11:43.577226   11792 logs.go:123] Gathering logs for kube-proxy [b727f7da91a4] ...
	I0701 05:11:43.577237   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b727f7da91a4"
	I0701 05:11:43.593372   11792 logs.go:123] Gathering logs for kube-controller-manager [47b1ac0ad61a] ...
	I0701 05:11:43.593386   11792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b1ac0ad61a"
	I0701 05:11:46.681545   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:46.681587   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0701 05:11:47.051078   11947 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0701 05:11:47.055481   11947 out.go:177] * Enabled addons: storage-provisioner
	I0701 05:11:47.068392   11947 addons.go:510] duration metric: took 30.490796917s for enable addons: enabled=[storage-provisioner]
	I0701 05:11:43.612171   11792 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:43.612180   11792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
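	# The "Gathering logs for ..." steps above can be replayed by hand against the same profile
	# (a sketch; the profile name and guest commands are taken from this log — minikube ssh runs
	# the quoted command inside the VM):
	#   out/minikube-darwin-arm64 ssh -p running-upgrade-803000 "sudo journalctl -u kubelet -n 400"
	#   out/minikube-darwin-arm64 ssh -p running-upgrade-803000 "docker logs --tail 400 <container-id>"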
	I0701 05:11:46.139384   11792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:51.143270   11792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:51.147670   11792 out.go:177] 
	W0701 05:11:51.151688   11792 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0701 05:11:51.151697   11792 out.go:239] * 
	W0701 05:11:51.152350   11792 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:11:51.166597   11792 out.go:177] 
	I0701 05:11:51.684589   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:51.684612   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:56.687280   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:56.687301   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:12:01.689015   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:12:01.689054   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
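	# The healthz probe that both 11792 and 11947 keep timing out on can be issued manually from
	# the host (a minimal sketch; -k skips TLS verification since the apiserver cert is self-signed):
	#   curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# A healthy apiserver answers "ok"; here the request never completes, which is what trips GUEST_START.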
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-01 12:02:49 UTC, ends at Mon 2024-07-01 12:12:07 UTC. --
	Jul 01 12:11:51 running-upgrade-803000 dockerd[3356]: time="2024-07-01T12:11:51.660755244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:11:51 running-upgrade-803000 dockerd[3356]: time="2024-07-01T12:11:51.660769160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:11:51 running-upgrade-803000 dockerd[3356]: time="2024-07-01T12:11:51.660862526Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5ac6bbf7ff34508a84f97f5d00ae1f4ae347e780240ec44b1e2177e33634a243 pid=18518 runtime=io.containerd.runc.v2
	Jul 01 12:11:51 running-upgrade-803000 dockerd[3356]: time="2024-07-01T12:11:51.660953476Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b9dd39a410d511ba6870ec00cced3a6a8fca072b7d4b64b2745afa7f0d82a569 pid=18523 runtime=io.containerd.runc.v2
	Jul 01 12:11:52 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:11:52Z" level=error msg="ContainerStats resp: {0x4000a00400 linux}"
	Jul 01 12:11:53 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:11:53Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 01 12:11:53 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:11:53Z" level=error msg="ContainerStats resp: {0x4000890f80 linux}"
	Jul 01 12:11:53 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:11:53Z" level=error msg="ContainerStats resp: {0x4000891400 linux}"
	Jul 01 12:11:53 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:11:53Z" level=error msg="ContainerStats resp: {0x400081f380 linux}"
	Jul 01 12:11:53 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:11:53Z" level=error msg="ContainerStats resp: {0x400081f7c0 linux}"
	Jul 01 12:11:53 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:11:53Z" level=error msg="ContainerStats resp: {0x4000898040 linux}"
	Jul 01 12:11:53 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:11:53Z" level=error msg="ContainerStats resp: {0x400024c480 linux}"
	Jul 01 12:11:53 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:11:53Z" level=error msg="ContainerStats resp: {0x4000898680 linux}"
	Jul 01 12:11:58 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:11:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 01 12:12:03 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 01 12:12:03 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:03Z" level=error msg="ContainerStats resp: {0x40008989c0 linux}"
	Jul 01 12:12:03 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:03Z" level=error msg="ContainerStats resp: {0x400081e0c0 linux}"
	Jul 01 12:12:04 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:04Z" level=error msg="ContainerStats resp: {0x400081f040 linux}"
	Jul 01 12:12:05 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:05Z" level=error msg="ContainerStats resp: {0x400081fe80 linux}"
	Jul 01 12:12:05 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:05Z" level=error msg="ContainerStats resp: {0x400024ca80 linux}"
	Jul 01 12:12:05 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:05Z" level=error msg="ContainerStats resp: {0x400024d080 linux}"
	Jul 01 12:12:05 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:05Z" level=error msg="ContainerStats resp: {0x4000886300 linux}"
	Jul 01 12:12:05 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:05Z" level=error msg="ContainerStats resp: {0x4000886940 linux}"
	Jul 01 12:12:05 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:05Z" level=error msg="ContainerStats resp: {0x400024cb40 linux}"
	Jul 01 12:12:05 running-upgrade-803000 cri-dockerd[3198]: time="2024-07-01T12:12:05Z" level=error msg="ContainerStats resp: {0x4000886e00 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	b9dd39a410d51       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   51a2e3dc2f277
	5ac6bbf7ff345       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   74248e243c73d
	10c4852b4d2c0       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   51a2e3dc2f277
	408434b9d5fff       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   74248e243c73d
	b727f7da91a46       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   83edb128906c5
	734c659ea4990       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   94a36bc02609a
	47b1ac0ad61ad       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   9ff9ad73f6db8
	23201d8f91906       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   6591039b46cef
	1f556802cdce1       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   50c7945dfe0b7
	0c7f28971fad0       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   97ba1ad4d15b0
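	# The table above is the "container status" gather step; the same view is available inside
	# the guest with either runtime CLI (a sketch, mirroring the fallback command logged earlier):
	#   sudo crictl ps -a    # CRI view, including the two Exited coredns attempts
	#   sudo docker ps -a    # Docker view of the same containers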
	
	
	==> coredns [10c4852b4d2c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5442799016333310150.1988522244053292260. HINFO: read udp 10.244.0.2:57509->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5442799016333310150.1988522244053292260. HINFO: read udp 10.244.0.2:49121->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5442799016333310150.1988522244053292260. HINFO: read udp 10.244.0.2:41092->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5442799016333310150.1988522244053292260. HINFO: read udp 10.244.0.2:38581->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5442799016333310150.1988522244053292260. HINFO: read udp 10.244.0.2:35896->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5442799016333310150.1988522244053292260. HINFO: read udp 10.244.0.2:47287->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5442799016333310150.1988522244053292260. HINFO: read udp 10.244.0.2:46224->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5442799016333310150.1988522244053292260. HINFO: read udp 10.244.0.2:58833->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5442799016333310150.1988522244053292260. HINFO: read udp 10.244.0.2:53507->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5442799016333310150.1988522244053292260. HINFO: read udp 10.244.0.2:59772->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [408434b9d5ff] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5511522197638881923.1347086654171943807. HINFO: read udp 10.244.0.3:50346->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5511522197638881923.1347086654171943807. HINFO: read udp 10.244.0.3:59135->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5511522197638881923.1347086654171943807. HINFO: read udp 10.244.0.3:48674->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5511522197638881923.1347086654171943807. HINFO: read udp 10.244.0.3:51798->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5511522197638881923.1347086654171943807. HINFO: read udp 10.244.0.3:38520->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5511522197638881923.1347086654171943807. HINFO: read udp 10.244.0.3:36657->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5511522197638881923.1347086654171943807. HINFO: read udp 10.244.0.3:36721->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5511522197638881923.1347086654171943807. HINFO: read udp 10.244.0.3:34008->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5511522197638881923.1347086654171943807. HINFO: read udp 10.244.0.3:43820->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5ac6bbf7ff34] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2525634133737530315.8603493614409447832. HINFO: read udp 10.244.0.3:58329->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2525634133737530315.8603493614409447832. HINFO: read udp 10.244.0.3:56650->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2525634133737530315.8603493614409447832. HINFO: read udp 10.244.0.3:36159->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b9dd39a410d5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4979430153254369930.8519999124694025871. HINFO: read udp 10.244.0.2:55172->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4979430153254369930.8519999124694025871. HINFO: read udp 10.244.0.2:36620->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4979430153254369930.8519999124694025871. HINFO: read udp 10.244.0.2:36935->10.0.2.3:53: i/o timeout
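	# All four coredns instances fail identically: their startup HINFO probes to the upstream
	# resolver 10.0.2.3:53 (QEMU user-mode networking's built-in DNS) time out. Upstream
	# reachability can be checked from inside the guest (a sketch; nslookup's second argument
	# selects the DNS server to query):
	#   nslookup kubernetes.io 10.0.2.3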
	
	
	==> describe nodes <==
	Name:               running-upgrade-803000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-803000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=running-upgrade-803000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_01T05_07_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:07:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-803000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:12:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:07:50 +0000   Mon, 01 Jul 2024 12:07:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:07:50 +0000   Mon, 01 Jul 2024 12:07:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:07:50 +0000   Mon, 01 Jul 2024 12:07:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:07:50 +0000   Mon, 01 Jul 2024 12:07:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-803000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7199174adec453b856cbcd97ee66299
	  System UUID:                d7199174adec453b856cbcd97ee66299
	  Boot ID:                    8927cb6d-d9cb-487b-8bb0-20c0b4cf99c9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-l8fhw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-vdtjx                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-803000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m16s
	  kube-system                 kube-apiserver-running-upgrade-803000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-803000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-sp4sz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-803000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-803000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-803000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-803000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-803000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-803000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-803000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-803000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-803000 event: Registered Node running-upgrade-803000 in Controller
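	# Despite the healthz timeouts seen from the host, the node itself reports Ready. The same
	# conditions can be pulled directly (a sketch; the binary and kubeconfig paths are the ones
	# used by the "describe nodes" gather step above):
	#   sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node running-upgrade-803000 -o jsonpath='{.status.conditions[*].type}'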
	
	
	==> dmesg <==
	[Jul 1 12:03] systemd-fstab-generator[881]: Ignoring "noauto" for root device
	[  +0.078381] systemd-fstab-generator[892]: Ignoring "noauto" for root device
	[  +0.067830] systemd-fstab-generator[903]: Ignoring "noauto" for root device
	[  +1.140737] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.074453] systemd-fstab-generator[1053]: Ignoring "noauto" for root device
	[  +0.077633] systemd-fstab-generator[1064]: Ignoring "noauto" for root device
	[  +2.621992] systemd-fstab-generator[1292]: Ignoring "noauto" for root device
	[ +21.201587] systemd-fstab-generator[2098]: Ignoring "noauto" for root device
	[  +2.434670] systemd-fstab-generator[2369]: Ignoring "noauto" for root device
	[  +0.150608] systemd-fstab-generator[2403]: Ignoring "noauto" for root device
	[  +0.088727] systemd-fstab-generator[2416]: Ignoring "noauto" for root device
	[  +0.097677] systemd-fstab-generator[2430]: Ignoring "noauto" for root device
	[  +2.685591] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.211179] systemd-fstab-generator[3154]: Ignoring "noauto" for root device
	[  +0.076701] systemd-fstab-generator[3166]: Ignoring "noauto" for root device
	[  +0.082646] systemd-fstab-generator[3177]: Ignoring "noauto" for root device
	[  +0.088730] systemd-fstab-generator[3191]: Ignoring "noauto" for root device
	[  +2.297260] systemd-fstab-generator[3343]: Ignoring "noauto" for root device
	[  +2.753424] systemd-fstab-generator[3932]: Ignoring "noauto" for root device
	[  +1.236343] systemd-fstab-generator[4162]: Ignoring "noauto" for root device
	[ +21.258267] kauditd_printk_skb: 68 callbacks suppressed
	[Jul 1 12:07] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.283819] systemd-fstab-generator[11579]: Ignoring "noauto" for root device
	[  +5.652078] systemd-fstab-generator[12177]: Ignoring "noauto" for root device
	[  +0.454425] systemd-fstab-generator[12308]: Ignoring "noauto" for root device
	
	
	==> etcd [0c7f28971fad] <==
	{"level":"info","ts":"2024-07-01T12:07:45.584Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-01T12:07:45.584Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-01T12:07:45.584Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-01T12:07:45.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-01T12:07:45.584Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-01T12:07:45.584Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-01T12:07:45.584Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-01T12:07:46.283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-01T12:07:46.283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-01T12:07:46.283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-01T12:07:46.283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:07:46.283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-01T12:07:46.283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-01T12:07:46.283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-01T12:07:46.283Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-803000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-01T12:07:46.283Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T12:07:46.283Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-01T12:07:46.284Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-01T12:07:46.285Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-01T12:07:46.286Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-01T12:07:46.285Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T12:07:46.285Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-01T12:07:46.290Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-01T12:07:46.291Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T12:07:46.291Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 12:12:07 up 9 min,  0 users,  load average: 0.18, 0.30, 0.18
	Linux running-upgrade-803000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [1f556802cdce] <==
	I0701 12:07:47.578783       1 cache.go:39] Caches are synced for autoregister controller
	I0701 12:07:47.578831       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 12:07:47.579313       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0701 12:07:47.579389       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 12:07:47.579518       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0701 12:07:47.609259       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0701 12:07:47.618702       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0701 12:07:48.310353       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0701 12:07:48.491709       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0701 12:07:48.498028       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0701 12:07:48.498055       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0701 12:07:48.651081       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0701 12:07:48.660942       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0701 12:07:48.755379       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0701 12:07:48.757288       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0701 12:07:48.757652       1 controller.go:611] quota admission added evaluator for: endpoints
	I0701 12:07:48.758803       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0701 12:07:49.623739       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0701 12:07:50.202098       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0701 12:07:50.206238       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0701 12:07:50.217997       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0701 12:07:50.244330       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 12:08:03.278612       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0701 12:08:03.379174       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0701 12:08:04.060147       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [47b1ac0ad61a] <==
	W0701 12:08:02.677996       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-803000. Assuming now as a timestamp.
	I0701 12:08:02.678039       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0701 12:08:02.678040       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0701 12:08:02.678172       1 event.go:294] "Event occurred" object="running-upgrade-803000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-803000 event: Registered Node running-upgrade-803000 in Controller"
	I0701 12:08:02.682175       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 12:08:02.682514       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 12:08:02.692103       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0701 12:08:02.704270       1 shared_informer.go:262] Caches are synced for job
	I0701 12:08:02.706990       1 shared_informer.go:262] Caches are synced for deployment
	I0701 12:08:02.710127       1 shared_informer.go:262] Caches are synced for attach detach
	I0701 12:08:02.724430       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0701 12:08:02.724469       1 shared_informer.go:262] Caches are synced for HPA
	I0701 12:08:02.724487       1 shared_informer.go:262] Caches are synced for persistent volume
	I0701 12:08:02.724523       1 shared_informer.go:262] Caches are synced for PVC protection
	I0701 12:08:02.724637       1 shared_informer.go:262] Caches are synced for daemon sets
	I0701 12:08:02.727401       1 shared_informer.go:262] Caches are synced for stateful set
	I0701 12:08:02.727452       1 shared_informer.go:262] Caches are synced for disruption
	I0701 12:08:02.727464       1 disruption.go:371] Sending events to api server.
	I0701 12:08:03.091348       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 12:08:03.097355       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 12:08:03.097366       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0701 12:08:03.279879       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0701 12:08:03.381706       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sp4sz"
	I0701 12:08:03.480752       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vdtjx"
	I0701 12:08:03.484770       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-l8fhw"
	
	
	==> kube-proxy [b727f7da91a4] <==
	I0701 12:08:03.969392       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0701 12:08:03.969433       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0701 12:08:03.969730       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 12:08:04.057316       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0701 12:08:04.057334       1 server_others.go:206] "Using iptables Proxier"
	I0701 12:08:04.057377       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 12:08:04.057603       1 server.go:661] "Version info" version="v1.24.1"
	I0701 12:08:04.057611       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:08:04.057967       1 config.go:317] "Starting service config controller"
	I0701 12:08:04.057982       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 12:08:04.057992       1 config.go:226] "Starting endpoint slice config controller"
	I0701 12:08:04.057995       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 12:08:04.058291       1 config.go:444] "Starting node config controller"
	I0701 12:08:04.058295       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 12:08:04.158757       1 shared_informer.go:262] Caches are synced for node config
	I0701 12:08:04.158765       1 shared_informer.go:262] Caches are synced for service config
	I0701 12:08:04.158755       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [23201d8f9190] <==
	W0701 12:07:47.532871       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 12:07:47.532890       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0701 12:07:47.533059       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 12:07:47.533095       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 12:07:47.533126       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 12:07:47.533159       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 12:07:47.533190       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 12:07:47.533206       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 12:07:47.533404       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 12:07:47.533426       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 12:07:47.533661       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 12:07:47.533682       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 12:07:47.533741       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 12:07:47.533762       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 12:07:47.533804       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 12:07:47.533827       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 12:07:47.534038       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 12:07:47.534060       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 12:07:48.401969       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 12:07:48.402030       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 12:07:48.488901       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 12:07:48.489132       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 12:07:48.510933       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 12:07:48.511007       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0701 12:07:48.729006       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
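	# The forbidden-list warnings above are the usual scheduler startup race: its informers start
	# before kubeadm's RBAC bindings are written, and the closing "Caches are synced" line shows
	# they resolved. Individual permissions can be spot-checked once the cluster is up (a sketch):
	#   kubectl auth can-i list pods --as=system:kube-scheduler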
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-01 12:02:49 UTC, ends at Mon 2024-07-01 12:12:07 UTC. --
	Jul 01 12:07:52 running-upgrade-803000 kubelet[12183]: E0701 12:07:52.032764   12183 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-803000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-803000"
	Jul 01 12:07:52 running-upgrade-803000 kubelet[12183]: E0701 12:07:52.232928   12183 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-803000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-803000"
	Jul 01 12:07:52 running-upgrade-803000 kubelet[12183]: I0701 12:07:52.430896   12183 request.go:601] Waited for 1.12370258s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 01 12:07:52 running-upgrade-803000 kubelet[12183]: E0701 12:07:52.434431   12183 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-803000\" already exists" pod="kube-system/etcd-running-upgrade-803000"
	Jul 01 12:08:02 running-upgrade-803000 kubelet[12183]: I0701 12:08:02.429429   12183 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 01 12:08:02 running-upgrade-803000 kubelet[12183]: I0701 12:08:02.429755   12183 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 01 12:08:02 running-upgrade-803000 kubelet[12183]: I0701 12:08:02.685617   12183 topology_manager.go:200] "Topology Admit Handler"
	Jul 01 12:08:02 running-upgrade-803000 kubelet[12183]: I0701 12:08:02.831841   12183 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5fca3ff3-729e-4fa3-a023-4214bcb9b84b-tmp\") pod \"storage-provisioner\" (UID: \"5fca3ff3-729e-4fa3-a023-4214bcb9b84b\") " pod="kube-system/storage-provisioner"
	Jul 01 12:08:02 running-upgrade-803000 kubelet[12183]: I0701 12:08:02.831870   12183 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x46x\" (UniqueName: \"kubernetes.io/projected/5fca3ff3-729e-4fa3-a023-4214bcb9b84b-kube-api-access-2x46x\") pod \"storage-provisioner\" (UID: \"5fca3ff3-729e-4fa3-a023-4214bcb9b84b\") " pod="kube-system/storage-provisioner"
	Jul 01 12:08:02 running-upgrade-803000 kubelet[12183]: E0701 12:08:02.938040   12183 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 01 12:08:02 running-upgrade-803000 kubelet[12183]: E0701 12:08:02.938062   12183 projected.go:192] Error preparing data for projected volume kube-api-access-2x46x for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 01 12:08:02 running-upgrade-803000 kubelet[12183]: E0701 12:08:02.938099   12183 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/5fca3ff3-729e-4fa3-a023-4214bcb9b84b-kube-api-access-2x46x podName:5fca3ff3-729e-4fa3-a023-4214bcb9b84b nodeName:}" failed. No retries permitted until 2024-07-01 12:08:03.438086057 +0000 UTC m=+13.246579263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2x46x" (UniqueName: "kubernetes.io/projected/5fca3ff3-729e-4fa3-a023-4214bcb9b84b-kube-api-access-2x46x") pod "storage-provisioner" (UID: "5fca3ff3-729e-4fa3-a023-4214bcb9b84b") : configmap "kube-root-ca.crt" not found
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.384578   12183 topology_manager.go:200] "Topology Admit Handler"
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.483313   12183 topology_manager.go:200] "Topology Admit Handler"
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.486127   12183 topology_manager.go:200] "Topology Admit Handler"
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.535082   12183 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/693a86ae-e8d0-49c6-910e-892c0d52e321-kube-proxy\") pod \"kube-proxy-sp4sz\" (UID: \"693a86ae-e8d0-49c6-910e-892c0d52e321\") " pod="kube-system/kube-proxy-sp4sz"
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.535114   12183 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw65g\" (UniqueName: \"kubernetes.io/projected/693a86ae-e8d0-49c6-910e-892c0d52e321-kube-api-access-kw65g\") pod \"kube-proxy-sp4sz\" (UID: \"693a86ae-e8d0-49c6-910e-892c0d52e321\") " pod="kube-system/kube-proxy-sp4sz"
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.535133   12183 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/693a86ae-e8d0-49c6-910e-892c0d52e321-xtables-lock\") pod \"kube-proxy-sp4sz\" (UID: \"693a86ae-e8d0-49c6-910e-892c0d52e321\") " pod="kube-system/kube-proxy-sp4sz"
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.535142   12183 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/693a86ae-e8d0-49c6-910e-892c0d52e321-lib-modules\") pod \"kube-proxy-sp4sz\" (UID: \"693a86ae-e8d0-49c6-910e-892c0d52e321\") " pod="kube-system/kube-proxy-sp4sz"
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.635659   12183 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51bc62e1-e91a-44b6-a3c8-818d107af385-config-volume\") pod \"coredns-6d4b75cb6d-vdtjx\" (UID: \"51bc62e1-e91a-44b6-a3c8-818d107af385\") " pod="kube-system/coredns-6d4b75cb6d-vdtjx"
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.635679   12183 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c2bc97a-c42a-4fa5-a09b-145fe4491527-config-volume\") pod \"coredns-6d4b75cb6d-l8fhw\" (UID: \"3c2bc97a-c42a-4fa5-a09b-145fe4491527\") " pod="kube-system/coredns-6d4b75cb6d-l8fhw"
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.635696   12183 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvnng\" (UniqueName: \"kubernetes.io/projected/3c2bc97a-c42a-4fa5-a09b-145fe4491527-kube-api-access-mvnng\") pod \"coredns-6d4b75cb6d-l8fhw\" (UID: \"3c2bc97a-c42a-4fa5-a09b-145fe4491527\") " pod="kube-system/coredns-6d4b75cb6d-l8fhw"
	Jul 01 12:08:03 running-upgrade-803000 kubelet[12183]: I0701 12:08:03.635709   12183 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsrh6\" (UniqueName: \"kubernetes.io/projected/51bc62e1-e91a-44b6-a3c8-818d107af385-kube-api-access-xsrh6\") pod \"coredns-6d4b75cb6d-vdtjx\" (UID: \"51bc62e1-e91a-44b6-a3c8-818d107af385\") " pod="kube-system/coredns-6d4b75cb6d-vdtjx"
	Jul 01 12:11:51 running-upgrade-803000 kubelet[12183]: I0701 12:11:51.710541   12183 scope.go:110] "RemoveContainer" containerID="83d49c28d07ea67c382232780f1ef2c7023463f0be1c4207bed442d850510754"
	Jul 01 12:11:51 running-upgrade-803000 kubelet[12183]: I0701 12:11:51.727119   12183 scope.go:110] "RemoveContainer" containerID="23db66bd25e4b81e1e06714b73f3937a1e9324726d5dbb7c73cd44c554f9d89a"
	
	
	==> storage-provisioner [734c659ea499] <==
	I0701 12:08:03.912620       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0701 12:08:03.924051       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0701 12:08:03.924271       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0701 12:08:03.930648       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0701 12:08:03.932419       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-803000_19deb5e5-d8f9-4b42-a3d7-f27eae0d8333!
	I0701 12:08:03.934847       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49c5dfbf-8377-478a-b190-739cc37602dc", APIVersion:"v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-803000_19deb5e5-d8f9-4b42-a3d7-f27eae0d8333 became leader
	I0701 12:08:04.040119       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-803000_19deb5e5-d8f9-4b42-a3d7-f27eae0d8333!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-803000 -n running-upgrade-803000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-803000 -n running-upgrade-803000: exit status 2 (15.581531875s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-803000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-803000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-803000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-803000: (1.242154833s)
--- FAIL: TestRunningBinaryUpgrade (600.20s)

TestKubernetesUpgrade (18.51s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-161000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-161000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.765515625s)

-- stdout --
	* [kubernetes-upgrade-161000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-161000" primary control-plane node in "kubernetes-upgrade-161000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-161000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
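The "Connection refused" on /var/run/socket_vmnet in the stdout above means the socket_vmnet daemon the qemu2 driver relies on was not accepting connections, so both VM creation attempts failed before Kubernetes was ever involved. A quick host-side check (a sketch; the socket path comes from the error itself):

    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet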
** stderr ** 
	I0701 05:05:24.490188   11869 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:05:24.490387   11869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:05:24.490390   11869 out.go:304] Setting ErrFile to fd 2...
	I0701 05:05:24.490392   11869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:05:24.490522   11869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:05:24.491787   11869 out.go:298] Setting JSON to false
	I0701 05:05:24.508559   11869 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7493,"bootTime":1719828031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:05:24.508638   11869 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:05:24.513194   11869 out.go:177] * [kubernetes-upgrade-161000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:05:24.520180   11869 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:05:24.520222   11869 notify.go:220] Checking for updates...
	I0701 05:05:24.527095   11869 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:05:24.530118   11869 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:05:24.533182   11869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:05:24.536125   11869 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:05:24.539123   11869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:05:24.542407   11869 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:05:24.542473   11869 config.go:182] Loaded profile config "running-upgrade-803000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:05:24.542530   11869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:05:24.547154   11869 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:05:24.554140   11869 start.go:297] selected driver: qemu2
	I0701 05:05:24.554146   11869 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:05:24.554151   11869 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:05:24.556261   11869 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:05:24.559060   11869 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:05:24.562204   11869 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 05:05:24.562227   11869 cni.go:84] Creating CNI manager for ""
	I0701 05:05:24.562233   11869 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0701 05:05:24.562259   11869 start.go:340] cluster config:
	{Name:kubernetes-upgrade-161000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:05:24.565743   11869 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:05:24.577182   11869 out.go:177] * Starting "kubernetes-upgrade-161000" primary control-plane node in "kubernetes-upgrade-161000" cluster
	I0701 05:05:24.581084   11869 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 05:05:24.581099   11869 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0701 05:05:24.581109   11869 cache.go:56] Caching tarball of preloaded images
	I0701 05:05:24.581166   11869 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:05:24.581176   11869 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0701 05:05:24.581238   11869 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/kubernetes-upgrade-161000/config.json ...
	I0701 05:05:24.581249   11869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/kubernetes-upgrade-161000/config.json: {Name:mkec64c69550d1082c24dd66686ae591b06f2d81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:05:24.581596   11869 start.go:360] acquireMachinesLock for kubernetes-upgrade-161000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:05:24.581628   11869 start.go:364] duration metric: took 24.542µs to acquireMachinesLock for "kubernetes-upgrade-161000"
	I0701 05:05:24.581640   11869 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-161000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:05:24.581662   11869 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:05:24.585122   11869 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:05:24.610558   11869 start.go:159] libmachine.API.Create for "kubernetes-upgrade-161000" (driver="qemu2")
	I0701 05:05:24.610581   11869 client.go:168] LocalClient.Create starting
	I0701 05:05:24.610659   11869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:05:24.610692   11869 main.go:141] libmachine: Decoding PEM data...
	I0701 05:05:24.610700   11869 main.go:141] libmachine: Parsing certificate...
	I0701 05:05:24.610745   11869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:05:24.610768   11869 main.go:141] libmachine: Decoding PEM data...
	I0701 05:05:24.610776   11869 main.go:141] libmachine: Parsing certificate...
	I0701 05:05:24.611114   11869 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:05:24.756561   11869 main.go:141] libmachine: Creating SSH key...
	I0701 05:05:24.814420   11869 main.go:141] libmachine: Creating Disk image...
	I0701 05:05:24.814426   11869 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:05:24.814605   11869 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2
	I0701 05:05:24.824327   11869 main.go:141] libmachine: STDOUT: 
	I0701 05:05:24.824346   11869 main.go:141] libmachine: STDERR: 
	I0701 05:05:24.824396   11869 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2 +20000M
	I0701 05:05:24.832429   11869 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:05:24.832440   11869 main.go:141] libmachine: STDERR: 
	I0701 05:05:24.832459   11869 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2
	I0701 05:05:24.832465   11869 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:05:24.832496   11869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:20:75:d3:9e:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2
	I0701 05:05:24.834011   11869 main.go:141] libmachine: STDOUT: 
	I0701 05:05:24.834024   11869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:05:24.834042   11869 client.go:171] duration metric: took 223.457792ms to LocalClient.Create
	I0701 05:05:26.836228   11869 start.go:128] duration metric: took 2.254547875s to createHost
	I0701 05:05:26.836307   11869 start.go:83] releasing machines lock for "kubernetes-upgrade-161000", held for 2.254679084s
	W0701 05:05:26.836396   11869 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:05:26.843728   11869 out.go:177] * Deleting "kubernetes-upgrade-161000" in qemu2 ...
	W0701 05:05:26.869754   11869 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:05:26.869791   11869 start.go:728] Will try again in 5 seconds ...
	I0701 05:05:31.872031   11869 start.go:360] acquireMachinesLock for kubernetes-upgrade-161000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:05:31.872593   11869 start.go:364] duration metric: took 465.333µs to acquireMachinesLock for "kubernetes-upgrade-161000"
	I0701 05:05:31.872674   11869 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-161000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:05:31.872922   11869 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:05:31.891675   11869 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:05:31.942711   11869 start.go:159] libmachine.API.Create for "kubernetes-upgrade-161000" (driver="qemu2")
	I0701 05:05:31.942772   11869 client.go:168] LocalClient.Create starting
	I0701 05:05:31.942900   11869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:05:31.942975   11869 main.go:141] libmachine: Decoding PEM data...
	I0701 05:05:31.942991   11869 main.go:141] libmachine: Parsing certificate...
	I0701 05:05:31.943051   11869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:05:31.943095   11869 main.go:141] libmachine: Decoding PEM data...
	I0701 05:05:31.943109   11869 main.go:141] libmachine: Parsing certificate...
	I0701 05:05:31.943671   11869 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:05:32.084824   11869 main.go:141] libmachine: Creating SSH key...
	I0701 05:05:32.167864   11869 main.go:141] libmachine: Creating Disk image...
	I0701 05:05:32.167870   11869 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:05:32.168025   11869 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2
	I0701 05:05:32.177501   11869 main.go:141] libmachine: STDOUT: 
	I0701 05:05:32.177519   11869 main.go:141] libmachine: STDERR: 
	I0701 05:05:32.177570   11869 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2 +20000M
	I0701 05:05:32.185748   11869 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:05:32.185760   11869 main.go:141] libmachine: STDERR: 
	I0701 05:05:32.185771   11869 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2
	I0701 05:05:32.185776   11869 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:05:32.185815   11869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:fe:1d:2d:6c:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2
	I0701 05:05:32.187440   11869 main.go:141] libmachine: STDOUT: 
	I0701 05:05:32.187453   11869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:05:32.187464   11869 client.go:171] duration metric: took 244.686125ms to LocalClient.Create
	I0701 05:05:34.189534   11869 start.go:128] duration metric: took 2.316606584s to createHost
	I0701 05:05:34.189592   11869 start.go:83] releasing machines lock for "kubernetes-upgrade-161000", held for 2.316959542s
	W0701 05:05:34.189708   11869 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-161000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-161000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:05:34.199915   11869 out.go:177] 
	W0701 05:05:34.207922   11869 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:05:34.207932   11869 out.go:239] * 
	* 
	W0701 05:05:34.208784   11869 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:05:34.217892   11869 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-161000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-161000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-161000: (3.327908666s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-161000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-161000 status --format={{.Host}}: exit status 7 (57.101792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
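A note on the status check above: minikube's status command takes a Go template over its status struct via --format, which is why the run prints only the host field. A minimal sketch using the same binary and profile as this test:

	# Print only the host state for the profile; a stopped host exits with status 7
	out/minikube-darwin-arm64 status -p kubernetes-upgrade-161000 --format='{{.Host}}'
	# -> Stopped

Exit status 7 simply encodes the stopped host state, so the test treats it as non-fatal and proceeds to the upgrade attempt.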
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-161000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-161000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.184638458s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-161000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-161000" primary control-plane node in "kubernetes-upgrade-161000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-161000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-161000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 05:05:37.642835   11905 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:05:37.642967   11905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:05:37.642970   11905 out.go:304] Setting ErrFile to fd 2...
	I0701 05:05:37.642973   11905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:05:37.643096   11905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:05:37.644109   11905 out.go:298] Setting JSON to false
	I0701 05:05:37.660638   11905 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7506,"bootTime":1719828031,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:05:37.660697   11905 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:05:37.665068   11905 out.go:177] * [kubernetes-upgrade-161000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:05:37.671670   11905 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:05:37.671765   11905 notify.go:220] Checking for updates...
	I0701 05:05:37.678693   11905 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:05:37.681690   11905 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:05:37.684751   11905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:05:37.687665   11905 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:05:37.690722   11905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:05:37.693970   11905 config.go:182] Loaded profile config "kubernetes-upgrade-161000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0701 05:05:37.694246   11905 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:05:37.697589   11905 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:05:37.704661   11905 start.go:297] selected driver: qemu2
	I0701 05:05:37.704666   11905 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-161000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:05:37.704715   11905 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:05:37.706923   11905 cni.go:84] Creating CNI manager for ""
	I0701 05:05:37.706937   11905 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:05:37.706957   11905 start.go:340] cluster config:
	{Name:kubernetes-upgrade-161000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:05:37.710174   11905 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:05:37.717698   11905 out.go:177] * Starting "kubernetes-upgrade-161000" primary control-plane node in "kubernetes-upgrade-161000" cluster
	I0701 05:05:37.721705   11905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:05:37.721718   11905 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:05:37.721725   11905 cache.go:56] Caching tarball of preloaded images
	I0701 05:05:37.721775   11905 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:05:37.721780   11905 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:05:37.721821   11905 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/kubernetes-upgrade-161000/config.json ...
	I0701 05:05:37.722266   11905 start.go:360] acquireMachinesLock for kubernetes-upgrade-161000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:05:37.722293   11905 start.go:364] duration metric: took 21.208µs to acquireMachinesLock for "kubernetes-upgrade-161000"
	I0701 05:05:37.722302   11905 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:05:37.722307   11905 fix.go:54] fixHost starting: 
	I0701 05:05:37.722423   11905 fix.go:112] recreateIfNeeded on kubernetes-upgrade-161000: state=Stopped err=<nil>
	W0701 05:05:37.722431   11905 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:05:37.730701   11905 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-161000" ...
	I0701 05:05:37.734720   11905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:fe:1d:2d:6c:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2
	I0701 05:05:37.736482   11905 main.go:141] libmachine: STDOUT: 
	I0701 05:05:37.736500   11905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:05:37.736526   11905 fix.go:56] duration metric: took 14.219292ms for fixHost
	I0701 05:05:37.736529   11905 start.go:83] releasing machines lock for "kubernetes-upgrade-161000", held for 14.232292ms
	W0701 05:05:37.736535   11905 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:05:37.736570   11905 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:05:37.736574   11905 start.go:728] Will try again in 5 seconds ...
	I0701 05:05:42.738747   11905 start.go:360] acquireMachinesLock for kubernetes-upgrade-161000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:05:42.739253   11905 start.go:364] duration metric: took 407.5µs to acquireMachinesLock for "kubernetes-upgrade-161000"
	I0701 05:05:42.739423   11905 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:05:42.739445   11905 fix.go:54] fixHost starting: 
	I0701 05:05:42.740213   11905 fix.go:112] recreateIfNeeded on kubernetes-upgrade-161000: state=Stopped err=<nil>
	W0701 05:05:42.740240   11905 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:05:42.748706   11905 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-161000" ...
	I0701 05:05:42.752028   11905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:fe:1d:2d:6c:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubernetes-upgrade-161000/disk.qcow2
	I0701 05:05:42.761883   11905 main.go:141] libmachine: STDOUT: 
	I0701 05:05:42.761941   11905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:05:42.762025   11905 fix.go:56] duration metric: took 22.582542ms for fixHost
	I0701 05:05:42.762042   11905 start.go:83] releasing machines lock for "kubernetes-upgrade-161000", held for 22.762083ms
	W0701 05:05:42.762311   11905 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-161000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-161000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:05:42.770644   11905 out.go:177] 
	W0701 05:05:42.773750   11905 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:05:42.773823   11905 out.go:239] * 
	* 
	W0701 05:05:42.775657   11905 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:05:42.784709   11905 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-161000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-161000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-161000 version --output=json: exit status 1 (64.435875ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-161000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
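The kubectl failure is a direct consequence of the failed provisioning above: the cluster never came up, so minikube never wrote a "kubernetes-upgrade-161000" context into the kubeconfig. A quick confirmation sketch, using the kubeconfig path from this test environment:

	# List contexts in the test kubeconfig; the profile's context will be absent
	KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig kubectl config get-contexts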
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-01 05:05:42.864506 -0700 PDT m=+963.443896168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-161000 -n kubernetes-upgrade-161000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-161000 -n kubernetes-upgrade-161000: exit status 7 (33.433834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-161000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-161000
--- FAIL: TestKubernetesUpgrade (18.51s)
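Every start attempt in this test dies at the same point: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" on /var/run/socket_vmnet because no socket_vmnet daemon is serving that socket. A triage sketch for the build host (paths are the ones from the log; the gateway address is an example value and the foreground invocation is an assumption based on socket_vmnet's documented CLI, not something captured in this report):

	# Is anything serving the socket the tests expect?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, start the daemon in the foreground for debugging (requires root);
	# the gateway address below is illustrative, not taken from this log
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet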

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.11s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19166
- KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1197943436/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.11s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.94s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19166
- KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1309700543/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.94s)
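Both hyperkit subtests fail for the same structural reason: hyperkit is an Intel-only hypervisor, and this agent is an Apple Silicon machine, so minikube rejects the driver up front with DRV_UNSUPPORTED_OS (surfaced as exit status 56). A one-line check mirroring what the driver validation effectively does:

	# hyperkit supports only darwin/amd64; this agent reports arm64
	uname -sm   # Darwin arm64 -> hyperkit unavailable; qemu2 is the usable VM driver here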

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (573.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4125809536 start -p stopped-upgrade-841000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4125809536 start -p stopped-upgrade-841000 --memory=2200 --vm-driver=qemu2 : (40.528224s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4125809536 -p stopped-upgrade-841000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4125809536 -p stopped-upgrade-841000 stop: (12.120332417s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-841000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-841000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.5448325s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-841000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-841000" primary control-plane node in "stopped-upgrade-841000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-841000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 05:06:37.128534   11947 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:06:37.128714   11947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:06:37.128718   11947 out.go:304] Setting ErrFile to fd 2...
	I0701 05:06:37.128721   11947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:06:37.128870   11947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:06:37.130072   11947 out.go:298] Setting JSON to false
	I0701 05:06:37.149587   11947 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7566,"bootTime":1719828031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:06:37.149665   11947 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:06:37.154020   11947 out.go:177] * [stopped-upgrade-841000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:06:37.160137   11947 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:06:37.160192   11947 notify.go:220] Checking for updates...
	I0701 05:06:37.167054   11947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:06:37.170034   11947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:06:37.173121   11947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:06:37.176023   11947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:06:37.179084   11947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:06:37.182363   11947 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:06:37.185999   11947 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0701 05:06:37.189080   11947 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:06:37.193032   11947 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:06:37.200069   11947 start.go:297] selected driver: qemu2
	I0701 05:06:37.200078   11947 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0701 05:06:37.200143   11947 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:06:37.202566   11947 cni.go:84] Creating CNI manager for ""
	I0701 05:06:37.202586   11947 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:06:37.202621   11947 start.go:340] cluster config:
	{Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0701 05:06:37.202679   11947 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:06:37.210029   11947 out.go:177] * Starting "stopped-upgrade-841000" primary control-plane node in "stopped-upgrade-841000" cluster
	I0701 05:06:37.214045   11947 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0701 05:06:37.214065   11947 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0701 05:06:37.214076   11947 cache.go:56] Caching tarball of preloaded images
	I0701 05:06:37.214157   11947 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:06:37.214163   11947 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0701 05:06:37.214228   11947 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/config.json ...
	I0701 05:06:37.214696   11947 start.go:360] acquireMachinesLock for stopped-upgrade-841000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:06:37.214737   11947 start.go:364] duration metric: took 34.125µs to acquireMachinesLock for "stopped-upgrade-841000"
	I0701 05:06:37.214748   11947 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:06:37.214753   11947 fix.go:54] fixHost starting: 
	I0701 05:06:37.214876   11947 fix.go:112] recreateIfNeeded on stopped-upgrade-841000: state=Stopped err=<nil>
	W0701 05:06:37.214885   11947 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:06:37.222112   11947 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-841000" ...
	I0701 05:06:37.226089   11947 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52333-:22,hostfwd=tcp::52334-:2376,hostname=stopped-upgrade-841000 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/disk.qcow2
	I0701 05:06:37.273786   11947 main.go:141] libmachine: STDOUT: 
	I0701 05:06:37.273811   11947 main.go:141] libmachine: STDERR: 
	I0701 05:06:37.273816   11947 main.go:141] libmachine: Waiting for VM to start (ssh -p 52333 docker@127.0.0.1)...
	I0701 05:06:56.396607   11947 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/config.json ...
	I0701 05:06:56.397106   11947 machine.go:94] provisionDockerMachine start ...
	I0701 05:06:56.397200   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.397476   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.397486   11947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 05:06:56.462254   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 05:06:56.462287   11947 buildroot.go:166] provisioning hostname "stopped-upgrade-841000"
	I0701 05:06:56.462366   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.462554   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.462561   11947 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-841000 && echo "stopped-upgrade-841000" | sudo tee /etc/hostname
	I0701 05:06:56.525951   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-841000
	
	I0701 05:06:56.526001   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.526140   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.526150   11947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-841000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-841000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-841000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 05:06:56.583129   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 05:06:56.583143   11947 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19166-9507/.minikube CaCertPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19166-9507/.minikube}
	I0701 05:06:56.583151   11947 buildroot.go:174] setting up certificates
	I0701 05:06:56.583159   11947 provision.go:84] configureAuth start
	I0701 05:06:56.583165   11947 provision.go:143] copyHostCerts
	I0701 05:06:56.583255   11947 exec_runner.go:144] found /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.pem, removing ...
	I0701 05:06:56.583262   11947 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.pem
	I0701 05:06:56.583365   11947 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.pem (1082 bytes)
	I0701 05:06:56.583560   11947 exec_runner.go:144] found /Users/jenkins/minikube-integration/19166-9507/.minikube/cert.pem, removing ...
	I0701 05:06:56.583564   11947 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19166-9507/.minikube/cert.pem
	I0701 05:06:56.583618   11947 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19166-9507/.minikube/cert.pem (1123 bytes)
	I0701 05:06:56.583733   11947 exec_runner.go:144] found /Users/jenkins/minikube-integration/19166-9507/.minikube/key.pem, removing ...
	I0701 05:06:56.583737   11947 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19166-9507/.minikube/key.pem
	I0701 05:06:56.583790   11947 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19166-9507/.minikube/key.pem (1679 bytes)
	I0701 05:06:56.583878   11947 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-841000 san=[127.0.0.1 localhost minikube stopped-upgrade-841000]
	I0701 05:06:56.701912   11947 provision.go:177] copyRemoteCerts
	I0701 05:06:56.701955   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 05:06:56.701964   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0701 05:06:56.730000   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 05:06:56.736849   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0701 05:06:56.746011   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0701 05:06:56.752797   11947 provision.go:87] duration metric: took 169.627458ms to configureAuth
	I0701 05:06:56.752805   11947 buildroot.go:189] setting minikube options for container-runtime
	I0701 05:06:56.752935   11947 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:06:56.752966   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.753068   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.753072   11947 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 05:06:56.804644   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 05:06:56.804656   11947 buildroot.go:70] root file system type: tmpfs
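The provisioner keys later decisions off the root filesystem type; on a boot2docker live image the root is tmpfs, as the probe above reports. The probe on its own:

    # Print only the filesystem type of / ("tmpfs" here):
    df --output=fstype / | tail -n 1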
	I0701 05:06:56.804706   11947 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 05:06:56.804761   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.804865   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.804897   11947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 05:06:56.860187   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
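The ExecStart line above has dockerd listen on tcp://0.0.0.0:2376 with TLS client authentication, using the server certificate provisioned earlier. Combined with the hostfwd mapping from the earlier qemu command (host 52334 to guest 2376), the endpoint can be exercised from the host; a sketch only, with $MINIKUBE_HOME standing in for the .minikube directory that copyHostCerts populated above:

    # Query the TLS-guarded daemon socket through the forwarded port:
    docker --tlsverify \
      --tlscacert "$MINIKUBE_HOME/ca.pem" \
      --tlscert   "$MINIKUBE_HOME/cert.pem" \
      --tlskey    "$MINIKUBE_HOME/key.pem" \
      -H tcp://127.0.0.1:52334 version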
	
	I0701 05:06:56.860246   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:56.860352   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:56.860360   11947 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 05:06:57.231922   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 05:06:57.231936   11947 machine.go:97] duration metric: took 834.824292ms to provisionDockerMachine
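The diff || { mv; systemctl ... } command above is an install-on-change guard: diff exits non-zero when the generated unit differs from the installed one, or when the installed one does not yet exist (the "can't stat" message here), and only then is the file moved into place and docker re-enabled and restarted. Spelled out:

    # Install-on-change idiom from the command above (paths as in the log):
    NEW=/lib/systemd/system/docker.service.new
    CUR=/lib/systemd/system/docker.service
    sudo diff -u "$CUR" "$NEW" || {      # non-zero exit: differs, or $CUR missing
      sudo mv "$NEW" "$CUR"
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker    # creates the multi-user.target.wants symlink
      sudo systemctl -f restart docker
    }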
	I0701 05:06:57.231942   11947 start.go:293] postStartSetup for "stopped-upgrade-841000" (driver="qemu2")
	I0701 05:06:57.231949   11947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 05:06:57.232017   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 05:06:57.232026   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0701 05:06:57.261286   11947 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 05:06:57.262585   11947 info.go:137] Remote host: Buildroot 2021.02.12
	I0701 05:06:57.262593   11947 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19166-9507/.minikube/addons for local assets ...
	I0701 05:06:57.262672   11947 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19166-9507/.minikube/files for local assets ...
	I0701 05:06:57.262790   11947 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem -> 100032.pem in /etc/ssl/certs
	I0701 05:06:57.262927   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 05:06:57.265956   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem --> /etc/ssl/certs/100032.pem (1708 bytes)
	I0701 05:06:57.272947   11947 start.go:296] duration metric: took 40.999208ms for postStartSetup
	I0701 05:06:57.272960   11947 fix.go:56] duration metric: took 20.058293875s for fixHost
	I0701 05:06:57.273003   11947 main.go:141] libmachine: Using SSH client type: native
	I0701 05:06:57.273125   11947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049be8e0] 0x1049c1140 <nil>  [] 0s} localhost 52333 <nil> <nil>}
	I0701 05:06:57.273130   11947 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 05:06:57.324390   11947 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719835617.217066337
	
	I0701 05:06:57.324399   11947 fix.go:216] guest clock: 1719835617.217066337
	I0701 05:06:57.324402   11947 fix.go:229] Guest: 2024-07-01 05:06:57.217066337 -0700 PDT Remote: 2024-07-01 05:06:57.272962 -0700 PDT m=+20.178339418 (delta=-55.895663ms)
	I0701 05:06:57.324413   11947 fix.go:200] guest clock delta is within tolerance: -55.895663ms
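The clock check above reads date +%s.%N over SSH and compares it against the host wall clock, tolerating small deltas (here about -56 ms). A coarser, whole-second version of the same comparison (macOS date does not support %N):

    # Whole-second skew between guest and host clocks (port from this log):
    GUEST=$(ssh -p 52333 docker@127.0.0.1 'date +%s')
    HOST=$(date +%s)
    echo "guest-host delta: $((GUEST - HOST))s"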
	I0701 05:06:57.324416   11947 start.go:83] releasing machines lock for "stopped-upgrade-841000", held for 20.109759042s
	I0701 05:06:57.324485   11947 ssh_runner.go:195] Run: cat /version.json
	I0701 05:06:57.324499   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0701 05:06:57.324485   11947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 05:06:57.324585   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	W0701 05:06:57.325129   11947 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:52459->127.0.0.1:52333: read: connection reset by peer
	I0701 05:06:57.325149   11947 retry.go:31] will retry after 274.068046ms: ssh: handshake failed: read tcp 127.0.0.1:52459->127.0.0.1:52333: read: connection reset by peer
	W0701 05:06:57.352775   11947 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0701 05:06:57.352820   11947 ssh_runner.go:195] Run: systemctl --version
	I0701 05:06:57.354391   11947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 05:06:57.355961   11947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 05:06:57.355985   11947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0701 05:06:57.358983   11947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0701 05:06:57.363518   11947 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 05:06:57.363532   11947 start.go:494] detecting cgroup driver to use...
	I0701 05:06:57.363611   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 05:06:57.370452   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0701 05:06:57.373775   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 05:06:57.376578   11947 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 05:06:57.376602   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 05:06:57.379550   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 05:06:57.382971   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 05:06:57.386503   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 05:06:57.389772   11947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 05:06:57.392488   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 05:06:57.395458   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 05:06:57.398870   11947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 05:06:57.402261   11947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 05:06:57.404817   11947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 05:06:57.407571   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:06:57.489974   11947 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 05:06:57.495643   11947 start.go:494] detecting cgroup driver to use...
	I0701 05:06:57.495698   11947 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 05:06:57.502003   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 05:06:57.510567   11947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 05:06:57.516460   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 05:06:57.520807   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 05:06:57.525116   11947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 05:06:57.584571   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 05:06:57.589738   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 05:06:57.595080   11947 ssh_runner.go:195] Run: which cri-dockerd
	I0701 05:06:57.596407   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 05:06:57.599144   11947 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 05:06:57.604351   11947 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 05:06:57.677606   11947 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 05:06:57.748345   11947 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 05:06:57.748399   11947 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 05:06:57.754673   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:06:57.823854   11947 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 05:06:58.985222   11947 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161355584s)
	I0701 05:06:58.985284   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 05:06:58.991654   11947 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 05:06:58.997945   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 05:06:59.002498   11947 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 05:06:59.085024   11947 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 05:06:59.165448   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:06:59.243195   11947 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 05:06:59.249662   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 05:06:59.254257   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:06:59.317881   11947 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 05:06:59.358091   11947 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 05:06:59.358164   11947 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 05:06:59.360281   11947 start.go:562] Will wait 60s for crictl version
	I0701 05:06:59.360315   11947 ssh_runner.go:195] Run: which crictl
	I0701 05:06:59.361559   11947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 05:06:59.376843   11947 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
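The version report above is crictl answering over the endpoint written to /etc/crictl.yaml moments earlier (unix:///var/run/cri-dockerd.sock), i.e. Docker 20.10.16 reached through the cri-dockerd CRI shim. The same query with the endpoint passed explicitly:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version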
	I0701 05:06:59.376911   11947 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 05:06:59.393550   11947 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 05:06:59.412401   11947 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0701 05:06:59.412464   11947 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0701 05:06:59.413748   11947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
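This one-liner is a replace-or-append update of /etc/hosts: filter out any existing line ending in the name, append the current mapping, and copy the temp file back over the original. As a standalone sketch:

    # Idempotently refresh the host.minikube.internal mapping:
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'10.0.2.2\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts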
	I0701 05:06:59.417203   11947 kubeadm.go:877] updating cluster {Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0701 05:06:59.417246   11947 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0701 05:06:59.417284   11947 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 05:06:59.427632   11947 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0701 05:06:59.427640   11947 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
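The mismatch above is a registry rename, not missing bits: the preload was built when these images lived under k8s.gcr.io, while this minikube checks for registry.k8s.io names, so registry.k8s.io/kube-apiserver:v1.24.1 "wasn't preloaded" even though the layers are present. Retagging would reconcile the names; a sketch of that idea only, since the log below shows minikube instead removing the images and reloading them from its local cache:

    # Hypothetical reconciliation by retag, one image as an example:
    docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1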
	I0701 05:06:59.427686   11947 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 05:06:59.431029   11947 ssh_runner.go:195] Run: which lz4
	I0701 05:06:59.432219   11947 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0701 05:06:59.433425   11947 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0701 05:06:59.433435   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0701 05:07:00.374915   11947 docker.go:649] duration metric: took 942.731292ms to copy over tarball
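The stat-then-scp pair above is the transfer guard used throughout this log: probe the remote path first, and push the file only when the probe exits with status 1 ("No such file or directory"). The probe on its own:

    # Prints "<size-bytes> <mtime>" if the file exists; exits 1 otherwise:
    stat -c "%s %y" /preloaded.tar.lz4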
	I0701 05:07:00.374973   11947 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0701 05:07:01.558430   11947 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1834495s)
	I0701 05:07:01.558455   11947 ssh_runner.go:146] rm: /preloaded.tar.lz4
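The preload itself is an lz4-compressed tar of pre-pulled image data unpacked straight into /var, with --xattrs preserving file capabilities. Its contents can be listed from the host cache without extracting; filename from this log:

    lz4 -dc preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 | tar -tf - | head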
	I0701 05:07:01.574562   11947 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 05:07:01.577619   11947 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0701 05:07:01.582495   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:07:01.662621   11947 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 05:07:03.278701   11947 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.616067458s)
	I0701 05:07:03.278789   11947 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 05:07:03.297382   11947 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0701 05:07:03.297393   11947 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0701 05:07:03.297398   11947 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0701 05:07:03.303373   11947 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:07:03.305287   11947 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:03.307277   11947 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:07:03.307341   11947 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0701 05:07:03.308898   11947 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:07:03.308970   11947 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:03.310230   11947 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0701 05:07:03.310388   11947 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:07:03.311852   11947 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:07:03.311865   11947 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:07:03.313241   11947 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:07:03.313327   11947 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:07:03.314273   11947 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:07:03.314295   11947 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:07:03.315169   11947 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:07:03.316058   11947 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:07:03.695491   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0701 05:07:03.695959   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:07:03.704586   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0701 05:07:03.707982   11947 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0701 05:07:03.708006   11947 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0701 05:07:03.708056   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0701 05:07:03.712800   11947 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0701 05:07:03.712822   11947 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:07:03.712871   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0701 05:07:03.723160   11947 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0701 05:07:03.723197   11947 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0701 05:07:03.723296   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0701 05:07:03.734786   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0701 05:07:03.735057   11947 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0701 05:07:03.735959   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0701 05:07:03.744293   11947 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0701 05:07:03.744321   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0701 05:07:03.744493   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0701 05:07:03.744596   11947 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0701 05:07:03.747638   11947 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0701 05:07:03.747661   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0701 05:07:03.762441   11947 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0701 05:07:03.762466   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
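Each cached image is streamed into the daemon with sudo cat ... | docker load; the sudo sits on the cat, presumably because the staged files under /var/lib/minikube/images are not readable by the SSH user, while docker load itself only needs the daemon socket. Equivalently, when the file is readable:

    docker load -i /var/lib/minikube/images/pause_3.7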
	W0701 05:07:03.768132   11947 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0701 05:07:03.768260   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:07:03.780065   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:07:03.781983   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:07:03.803667   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:07:03.858138   11947 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0701 05:07:03.858166   11947 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0701 05:07:03.858188   11947 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:07:03.858248   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0701 05:07:03.858249   11947 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0701 05:07:03.858296   11947 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:07:03.858325   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0701 05:07:03.862664   11947 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0701 05:07:03.862687   11947 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:07:03.862667   11947 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0701 05:07:03.862708   11947 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:07:03.862750   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0701 05:07:03.862750   11947 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0701 05:07:03.889840   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0701 05:07:03.899867   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0701 05:07:03.899994   11947 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0701 05:07:03.906579   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0701 05:07:03.906590   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0701 05:07:03.917072   11947 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0701 05:07:03.917111   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0701 05:07:03.935893   11947 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0701 05:07:03.936016   11947 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:03.979053   11947 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0701 05:07:03.979076   11947 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:03.979128   11947 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:07:04.008109   11947 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0701 05:07:04.008127   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0701 05:07:04.027094   11947 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0701 05:07:04.027221   11947 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0701 05:07:04.106231   11947 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0701 05:07:04.106239   11947 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0701 05:07:04.106266   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0701 05:07:04.114255   11947 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0701 05:07:04.114269   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0701 05:07:04.275123   11947 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0701 05:07:04.275158   11947 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0701 05:07:04.275209   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0701 05:07:04.516973   11947 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0701 05:07:04.517013   11947 cache_images.go:92] duration metric: took 1.219612333s to LoadCachedImages
	W0701 05:07:04.517054   11947 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0701 05:07:04.517061   11947 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0701 05:07:04.517116   11947 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-841000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 05:07:04.517176   11947 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 05:07:04.530828   11947 cni.go:84] Creating CNI manager for ""
	I0701 05:07:04.530841   11947 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:07:04.530846   11947 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 05:07:04.530854   11947 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-841000 NodeName:stopped-upgrade-841000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 05:07:04.530923   11947 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-841000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 05:07:04.530977   11947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0701 05:07:04.534578   11947 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 05:07:04.534618   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 05:07:04.537342   11947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0701 05:07:04.542053   11947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 05:07:04.547027   11947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
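The kubeadm config assembled above is shipped to the node as kubeadm.yaml.new and ultimately drives kubeadm there; the exact invocation falls outside this excerpt, but the consuming call is of the form:

    # Sketch only; flags beyond --config are not shown in this excerpt:
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml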
	I0701 05:07:04.552583   11947 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0701 05:07:04.553704   11947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 05:07:04.557215   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:07:04.634852   11947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 05:07:04.641045   11947 certs.go:68] Setting up /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000 for IP: 10.0.2.15
	I0701 05:07:04.641057   11947 certs.go:194] generating shared ca certs ...
	I0701 05:07:04.641066   11947 certs.go:226] acquiring lock for ca certs: {Name:mkd4046b456c87b80b2e6f34890c01f767ca15e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:04.641241   11947 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.key
	I0701 05:07:04.641292   11947 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/proxy-client-ca.key
	I0701 05:07:04.641299   11947 certs.go:256] generating profile certs ...
	I0701 05:07:04.641382   11947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.key
	I0701 05:07:04.641400   11947 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301
	I0701 05:07:04.641423   11947 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0701 05:07:04.765449   11947 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 ...
	I0701 05:07:04.765464   11947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301: {Name:mkd89e4947fa3c5d3ba4b598d83619c33a5b2c2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:04.769882   11947 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301 ...
	I0701 05:07:04.769891   11947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301: {Name:mk2fda541721dec72ff3d6d7d66d18f65003a0f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:04.770028   11947 certs.go:381] copying /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt
	I0701 05:07:04.770174   11947 certs.go:385] copying /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key
	I0701 05:07:04.770335   11947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/proxy-client.key
	I0701 05:07:04.770467   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/10003.pem (1338 bytes)
	W0701 05:07:04.770496   11947 certs.go:480] ignoring /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/10003_empty.pem, impossibly tiny 0 bytes
	I0701 05:07:04.770501   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca-key.pem (1679 bytes)
	I0701 05:07:04.770528   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem (1082 bytes)
	I0701 05:07:04.770550   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem (1123 bytes)
	I0701 05:07:04.770576   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/key.pem (1679 bytes)
	I0701 05:07:04.770624   11947 certs.go:484] found cert: /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem (1708 bytes)
	I0701 05:07:04.770962   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 05:07:04.778649   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 05:07:04.786271   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 05:07:04.793263   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0701 05:07:04.799909   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0701 05:07:04.806777   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0701 05:07:04.813419   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 05:07:04.820363   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 05:07:04.827257   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/ssl/certs/100032.pem --> /usr/share/ca-certificates/100032.pem (1708 bytes)
	I0701 05:07:04.834397   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 05:07:04.840880   11947 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/10003.pem --> /usr/share/ca-certificates/10003.pem (1338 bytes)
	I0701 05:07:04.847614   11947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 05:07:04.852938   11947 ssh_runner.go:195] Run: openssl version
	I0701 05:07:04.854761   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100032.pem && ln -fs /usr/share/ca-certificates/100032.pem /etc/ssl/certs/100032.pem"
	I0701 05:07:04.858176   11947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100032.pem
	I0701 05:07:04.859600   11947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 11:50 /usr/share/ca-certificates/100032.pem
	I0701 05:07:04.859619   11947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100032.pem
	I0701 05:07:04.861392   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100032.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 05:07:04.864155   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 05:07:04.867365   11947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 05:07:04.868881   11947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:03 /usr/share/ca-certificates/minikubeCA.pem
	I0701 05:07:04.868901   11947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 05:07:04.870625   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 05:07:04.873743   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10003.pem && ln -fs /usr/share/ca-certificates/10003.pem /etc/ssl/certs/10003.pem"
	I0701 05:07:04.876485   11947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10003.pem
	I0701 05:07:04.877856   11947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 11:50 /usr/share/ca-certificates/10003.pem
	I0701 05:07:04.877877   11947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10003.pem
	I0701 05:07:04.879648   11947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10003.pem /etc/ssl/certs/51391683.0"
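
Each certificate above goes through the same three-step install: link it into /etc/ssl/certs under its own name, ask openssl for its subject hash (3ec20f2e, b5213941, 51391683 in this run), and add a <hash>.0 symlink, which is the name OpenSSL's lookup actually resolves. A sketch of that pattern in Go (the helper name is hypothetical, and writing /etc/ssl/certs needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert mirrors the log: symlink the PEM into /etc/ssl/certs,
    // then add the <subject-hash>.0 alias OpenSSL's verification expects.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

        link := filepath.Join("/etc/ssl/certs", filepath.Base(pemPath))
        if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
            return err
        }
        // "<hash>.0": the filename openssl resolves during chain building.
        if err := os.Symlink(link, filepath.Join("/etc/ssl/certs", hash+".0")); err != nil && !os.IsExist(err) {
            return err
        }
        return nil
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
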
	I0701 05:07:04.883023   11947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 05:07:04.884620   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 05:07:04.886768   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 05:07:04.888850   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 05:07:04.890924   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 05:07:04.892710   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 05:07:04.894415   11947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
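
The seven openssl runs above are freshness probes: `-checkend 86400` exits non-zero when a certificate expires within the next 86400 seconds (24 hours), which is the cue to regenerate it. The same test in pure Go, as a sketch (path and window taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresSoon reports whether the certificate at path expires within
    // window, i.e. what `openssl x509 -checkend 86400` answers.
    func expiresSoon(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
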
	I0701 05:07:04.896565   11947 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0701 05:07:04.896638   11947 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 05:07:04.906593   11947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 05:07:04.909570   11947 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 05:07:04.909576   11947 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 05:07:04.909579   11947 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 05:07:04.909602   11947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 05:07:04.912291   11947 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 05:07:04.912593   11947 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-841000" does not appear in /Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:07:04.912692   11947 kubeconfig.go:62] /Users/jenkins/minikube-integration/19166-9507/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-841000" cluster setting kubeconfig missing "stopped-upgrade-841000" context setting]
	I0701 05:07:04.912889   11947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/kubeconfig: {Name:mk4c90f67cab929310b408048010034fbad6f093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:07:04.913328   11947 kapi.go:59] client config for stopped-upgrade-841000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.key", CAFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105d4d090), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
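
The kapi line above records the rest.Config minikube builds for the profile: client certificate and key from the profile directory, CA from .minikube/ca.crt, endpoint https://10.0.2.15:8443. A minimal client-go sketch of the same construction (requires the k8s.io/client-go module; the Nodes list call is illustrative, and against this particular cluster it would fail, since the apiserver never becomes healthy below):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Same certificate material the kapi log line names.
        profile := "/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000"
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/client.crt",
                KeyFile:  profile + "/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Illustrative call; in this run it would time out, the apiserver never answers.
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            fmt.Println("list nodes:", err)
            return
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
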
	I0701 05:07:04.913676   11947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 05:07:04.916187   11947 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-841000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
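
Drift detection here is just `diff -u old new`: exit status 0 means identical, 1 means the files differ (the diff above, where the criSocket gains its unix:// scheme and the cgroup driver flips from systemd to cgroupfs), 2 means diff itself failed. A Go sketch of that exit-code triage (the function name is hypothetical, not minikube's kubeadm.go):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // exit 0: identical, nothing to do
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // exit 1: files differ; out is the unified diff
        }
        return false, "", err // exit 2 (or exec failure): diff itself broke
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        if drifted {
            fmt.Print("reconfiguring cluster from new kubeadm.yaml:\n" + diff)
        }
    }
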
	I0701 05:07:04.916193   11947 kubeadm.go:1154] stopping kube-system containers ...
	I0701 05:07:04.916229   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 05:07:04.926935   11947 docker.go:483] Stopping containers: [b42d072377e4 d5dd8fab773c 4290f4ea2713 6093aa79356b 4fa696cbe259 164948541ac9 6bb114ebadf6 61acb4180c04]
	I0701 05:07:04.927008   11947 ssh_runner.go:195] Run: docker stop b42d072377e4 d5dd8fab773c 4290f4ea2713 6093aa79356b 4fa696cbe259 164948541ac9 6bb114ebadf6 61acb4180c04
	I0701 05:07:04.937074   11947 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 05:07:04.942651   11947 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 05:07:04.945567   11947 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 05:07:04.945572   11947 kubeadm.go:156] found existing configuration files:
	
	I0701 05:07:04.945597   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf
	I0701 05:07:04.947887   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0701 05:07:04.947905   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0701 05:07:04.950983   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf
	I0701 05:07:04.954059   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0701 05:07:04.954078   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0701 05:07:04.956696   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf
	I0701 05:07:04.959243   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0701 05:07:04.959266   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 05:07:04.962532   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf
	I0701 05:07:04.965165   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0701 05:07:04.965190   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
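
The grep/rm pairs above implement stale-config cleanup: each kubeconfig is kept only if it already mentions the expected control-plane endpoint; a missing file or a grep miss (exit status 2 here, since the files do not exist) leads to `rm -f` so the next phase regenerates it. The same loop as a Go sketch, with the endpoint and paths taken from the log:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:52368")
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            data, err := os.ReadFile(conf)
            if err == nil && bytes.Contains(data, endpoint) {
                continue // endpoint already correct; keep the file
            }
            // Missing or pointing elsewhere: rm -f semantics (ignore errors),
            // so `kubeadm init phase kubeconfig all` writes a fresh copy.
            os.Remove(conf)
            fmt.Println("removed stale", conf)
        }
    }
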
	I0701 05:07:04.967590   11947 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 05:07:04.970701   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:07:04.994726   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:07:05.591930   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:07:05.720534   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 05:07:05.746465   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
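
Rather than a full `kubeadm init`, the restart path replays individual phases in order: certs, kubeconfig, kubelet-start, control-plane, and local etcd, each against the freshly copied /var/tmp/minikube/kubeadm.yaml with the pinned v1.24.1 binaries on PATH. A sketch of that driver loop (using the explicit binary path rather than relying on PATH lookup):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, phase := range phases {
            args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command(kubeadm, args...)
            // Keep the pinned binaries first on PATH for anything kubeadm execs.
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }
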
	I0701 05:07:05.776673   11947 api_server.go:52] waiting for apiserver process to appear ...
	I0701 05:07:05.776756   11947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:07:06.278810   11947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:07:06.778784   11947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:07:06.783095   11947 api_server.go:72] duration metric: took 1.006427708s to wait for apiserver process to appear ...
	I0701 05:07:06.783105   11947 api_server.go:88] waiting for apiserver healthz status ...
	I0701 05:07:06.783120   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:11.785236   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:11.785264   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:16.785545   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:16.785590   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:21.786045   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:21.786094   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:26.786790   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:26.786856   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:31.787888   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:31.787921   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:36.788965   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:36.789025   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:41.790578   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:41.790636   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:46.792401   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:46.792424   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:51.794226   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:51.794269   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:07:56.796081   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:07:56.796171   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:01.798625   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:01.798650   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:06.800274   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
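
From here the log settles into its failure loop: every healthz GET against https://10.0.2.15:8443 times out after roughly five seconds, minikube gathers diagnostics, then tries again. A sketch of the polling pattern, assuming a 5s per-request timeout to match the gaps above (TLS verification is skipped for brevity; the real check presents the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, attempts int) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5 s spacing of the checks above
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        for i := 0; i < attempts; i++ {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s never became healthy", url)
    }

    func main() {
        if err := waitForHealthz("https://10.0.2.15:8443/healthz", 12); err != nil {
            fmt.Println(err) // in this run: every attempt times out
        }
    }
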
	I0701 05:08:06.800456   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:06.818301   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:06.818397   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:06.832457   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:06.832526   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:06.844037   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:06.844120   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:06.854940   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:06.854998   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:06.865273   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:06.865347   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:06.875497   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:06.875565   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:06.885225   11947 logs.go:276] 0 containers: []
	W0701 05:08:06.885239   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:06.885295   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:06.903731   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:06.903748   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:06.903755   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:06.930909   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:06.930922   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:06.946517   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:06.946528   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:06.958821   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:06.958831   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:06.975952   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:06.975964   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:06.992692   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:06.992702   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:07.006758   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:07.006769   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:07.018579   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:07.018590   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:07.030013   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:07.030023   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:07.054872   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:07.054881   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:07.090985   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:07.090995   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:07.095108   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:07.095114   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:07.112922   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:07.112932   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:07.128897   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:07.128907   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:07.144463   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:07.144474   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:07.254423   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:07.254434   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
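
Each diagnostics pass, this one and every later one, follows the same two-step shape: `docker ps -a --filter name=k8s_<component>` to enumerate container IDs per control-plane component, then `docker logs --tail 400` on each hit. A Go sketch of that gathering loop (component list abbreviated):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containersFor lists IDs of containers whose names match k8s_<component>,
    // the same filter every "docker ps -a --filter=name=..." line above uses.
    func containersFor(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containersFor(c)
            if err != nil {
                fmt.Println(c, err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
            }
        }
    }
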
	I0701 05:08:09.782963   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:14.784924   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:14.785107   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:14.797952   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:14.798025   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:14.808443   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:14.808511   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:14.819155   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:14.819229   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:14.829597   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:14.829663   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:14.840008   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:14.840086   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:14.850734   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:14.850802   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:14.861378   11947 logs.go:276] 0 containers: []
	W0701 05:08:14.861389   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:14.861444   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:14.872183   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:14.872203   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:14.872208   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:14.876804   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:14.876813   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:14.891265   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:14.891275   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:14.905022   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:14.905031   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:14.922284   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:14.922294   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:14.946998   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:14.947006   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:14.958027   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:14.958037   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:14.969790   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:14.969802   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:15.007384   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:15.007394   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:15.032467   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:15.032476   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:15.050534   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:15.050543   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:15.069408   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:15.069421   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:15.084358   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:15.084367   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:15.121296   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:15.121306   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:15.136011   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:15.136025   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:15.147884   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:15.147898   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:17.662083   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:22.664384   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:22.664797   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:22.702538   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:22.702673   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:22.723065   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:22.723181   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:22.738357   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:22.738436   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:22.752851   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:22.752921   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:22.763364   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:22.763434   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:22.773771   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:22.773850   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:22.784104   11947 logs.go:276] 0 containers: []
	W0701 05:08:22.784115   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:22.784172   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:22.796759   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:22.796777   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:22.796783   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:22.811104   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:22.811114   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:22.822480   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:22.822492   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:22.834795   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:22.834807   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:22.846305   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:22.846317   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:22.850356   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:22.850366   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:22.874763   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:22.874774   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:22.893774   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:22.893784   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:22.904635   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:22.904645   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:22.922872   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:22.922883   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:22.934928   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:22.934940   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:22.972909   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:22.972917   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:23.007556   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:23.007568   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:23.021901   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:23.021913   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:23.046772   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:23.046779   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:23.061350   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:23.061360   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:25.581692   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:30.584116   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:30.584201   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:30.595609   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:30.595696   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:30.605990   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:30.606057   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:30.616742   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:30.616811   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:30.627351   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:30.627425   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:30.637111   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:30.637180   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:30.647755   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:30.647820   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:30.658046   11947 logs.go:276] 0 containers: []
	W0701 05:08:30.658058   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:30.658117   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:30.668284   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:30.668302   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:30.668308   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:30.679907   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:30.679920   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:30.697276   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:30.697286   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:30.711902   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:30.711910   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:30.736096   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:30.736106   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:30.753118   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:30.753128   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:30.777645   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:30.777652   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:30.813059   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:30.813074   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:30.827168   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:30.827182   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:30.838767   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:30.838780   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:30.853077   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:30.853090   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:30.864317   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:30.864329   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:30.878011   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:30.878024   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:30.891770   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:30.891783   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:30.904032   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:30.904042   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:30.940659   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:30.940667   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:33.446584   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:38.448878   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:38.448981   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:38.459885   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:38.459955   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:38.470882   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:38.470953   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:38.481394   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:38.481460   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:38.491879   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:38.491951   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:38.502278   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:38.502354   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:38.513035   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:38.513103   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:38.523501   11947 logs.go:276] 0 containers: []
	W0701 05:08:38.523516   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:38.523572   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:38.533945   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:38.533964   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:38.533969   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:38.552225   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:38.552237   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:38.564004   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:38.564014   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:38.578232   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:38.578244   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:38.583183   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:38.583190   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:38.618596   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:38.618610   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:38.632879   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:38.632890   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:38.657981   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:38.657991   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:38.669724   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:38.669733   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:38.708132   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:38.708146   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:38.719910   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:38.719921   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:38.731550   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:38.731561   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:38.755223   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:38.755230   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:38.766919   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:38.766929   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:38.780953   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:38.780963   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:38.797384   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:38.797398   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:41.317047   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:46.319391   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:46.319634   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:46.346856   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:46.346975   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:46.361782   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:46.361858   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:46.373607   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:46.373682   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:46.384407   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:46.384478   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:46.396821   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:46.396886   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:46.407848   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:46.407924   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:46.418050   11947 logs.go:276] 0 containers: []
	W0701 05:08:46.418063   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:46.418130   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:46.428553   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:46.428572   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:46.428578   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:46.432585   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:46.432591   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:46.467547   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:46.467561   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:46.494049   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:46.494060   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:46.507977   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:46.507988   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:46.522364   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:46.522374   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:46.560162   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:46.560174   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:46.578366   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:46.578376   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:46.602563   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:46.602574   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:46.617003   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:46.617014   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:46.641369   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:46.641378   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:46.656789   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:46.656805   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:46.669318   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:46.669328   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:46.680724   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:46.680733   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:46.691959   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:46.691970   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:46.703885   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:46.703897   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:49.223295   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:08:54.225697   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:08:54.225925   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:08:54.248177   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:08:54.248279   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:08:54.263963   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:08:54.264037   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:08:54.278729   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:08:54.278805   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:08:54.289089   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:08:54.289162   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:08:54.299197   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:08:54.299259   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:08:54.324160   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:08:54.324234   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:08:54.336778   11947 logs.go:276] 0 containers: []
	W0701 05:08:54.336788   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:08:54.336844   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:08:54.347357   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:08:54.347375   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:08:54.347381   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:08:54.368543   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:08:54.368553   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:08:54.380309   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:08:54.380322   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:08:54.394556   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:08:54.394570   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:08:54.406498   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:08:54.406512   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:08:54.418501   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:08:54.418512   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:08:54.433738   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:08:54.433749   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:08:54.448323   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:08:54.448333   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:08:54.462905   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:08:54.462916   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:08:54.499235   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:08:54.499243   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:08:54.513000   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:08:54.513012   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:08:54.537610   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:08:54.537621   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:08:54.549978   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:08:54.549991   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:08:54.575443   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:08:54.575454   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:08:54.587419   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:08:54.587429   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:08:54.591659   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:08:54.591665   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:08:57.128718   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:02.130130   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:02.130345   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:02.156153   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:02.156249   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:02.175573   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:02.175657   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:02.188429   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:02.188510   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:02.199652   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:02.199738   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:02.210851   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:02.210920   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:02.222004   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:02.222076   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:02.232535   11947 logs.go:276] 0 containers: []
	W0701 05:09:02.232551   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:02.232609   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:02.243111   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:02.243133   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:02.243137   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:02.256862   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:02.256872   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:02.272246   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:02.272256   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:02.284525   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:02.284539   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:02.298084   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:02.298097   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:02.333798   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:02.333809   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:02.348845   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:02.348858   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:02.364321   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:02.364334   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:02.382942   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:02.382954   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:02.395261   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:02.395274   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:02.431378   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:02.431386   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:02.444929   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:02.444939   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:02.467754   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:02.467763   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:02.478815   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:02.478827   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:02.482985   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:02.482992   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:02.508252   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:02.508261   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:05.033003   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:10.033817   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:10.034128   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:10.068466   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:10.068634   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:10.089835   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:10.089927   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:10.103862   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:10.103937   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:10.115843   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:10.115917   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:10.127229   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:10.127299   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:10.139239   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:10.139304   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:10.157195   11947 logs.go:276] 0 containers: []
	W0701 05:09:10.157209   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:10.157267   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:10.167561   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:10.167578   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:10.167583   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:10.182161   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:10.182170   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:10.205909   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:10.205922   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:10.217937   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:10.217948   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:10.229776   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:10.229787   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:10.243539   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:10.243550   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:10.258850   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:10.258866   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:10.272634   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:10.272645   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:10.309688   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:10.309707   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:10.346453   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:10.346469   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:10.361634   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:10.361644   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:10.373190   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:10.373200   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:10.395622   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:10.395630   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:10.406884   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:10.406897   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:10.411396   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:10.411403   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:10.428518   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:10.428532   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:12.942850   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:17.945570   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:17.945702   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:17.957726   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:17.957797   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:17.968366   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:17.968451   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:17.978796   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:17.978857   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:17.989750   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:17.989817   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:18.000207   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:18.000264   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:18.010925   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:18.010997   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:18.021264   11947 logs.go:276] 0 containers: []
	W0701 05:09:18.021275   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:18.021334   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:18.031402   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:18.031416   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:18.031421   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:18.048296   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:18.048306   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:18.073640   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:18.073650   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:18.084985   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:18.085001   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:18.099770   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:18.099783   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:18.118539   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:18.118553   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:18.123229   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:18.123235   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:18.140566   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:18.140579   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:18.154049   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:18.154059   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:18.168371   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:18.168385   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:18.201202   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:18.201213   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:18.217794   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:18.217806   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:18.229839   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:18.229854   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:18.266958   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:18.266970   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:18.291247   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:18.291259   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:18.302613   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:18.302627   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:20.829771   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:25.832096   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:25.832257   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:25.844635   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:25.844709   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:25.862251   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:25.862322   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:25.872348   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:25.872413   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:25.882765   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:25.882836   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:25.893095   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:25.893163   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:25.903807   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:25.903878   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:25.917562   11947 logs.go:276] 0 containers: []
	W0701 05:09:25.917572   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:25.917626   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:25.927986   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:25.928012   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:25.928017   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:25.939298   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:25.939309   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:25.950969   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:25.950979   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:25.967125   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:25.967136   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:25.980827   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:25.980836   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:26.004812   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:26.004823   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:26.016343   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:26.016354   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:26.020441   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:26.020450   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:26.045184   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:26.045194   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:26.058844   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:26.058854   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:26.076319   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:26.076329   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:26.091153   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:26.091163   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:26.109414   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:26.109427   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:26.126117   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:26.126127   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:26.162660   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:26.162668   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:26.198116   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:26.198127   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:28.711802   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:33.714095   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:33.714330   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:33.740632   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:33.740759   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:33.758576   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:33.758670   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:33.771959   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:33.772032   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:33.785074   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:33.785150   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:33.796417   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:33.796487   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:33.807374   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:33.807441   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:33.817663   11947 logs.go:276] 0 containers: []
	W0701 05:09:33.817675   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:33.817744   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:33.828774   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:33.828791   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:33.828797   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:33.865779   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:33.865788   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:33.879714   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:33.879724   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:33.895077   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:33.895086   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:33.906759   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:33.906771   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:33.917889   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:33.917900   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:33.929295   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:33.929305   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:33.951770   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:33.951777   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:33.963034   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:33.963044   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:33.977861   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:33.977870   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:33.999357   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:33.999367   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:34.013698   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:34.013709   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:34.018121   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:34.018128   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:34.053676   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:34.053687   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:34.068347   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:34.068357   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:34.094611   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:34.094622   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:36.612317   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:41.614690   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:41.615036   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:41.644617   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:41.644746   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:41.672187   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:41.672261   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:41.684519   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:41.684591   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:41.695625   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:41.695708   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:41.706528   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:41.706588   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:41.717122   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:41.717182   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:41.727439   11947 logs.go:276] 0 containers: []
	W0701 05:09:41.727453   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:41.727507   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:41.738229   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:41.738246   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:41.738251   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:41.763424   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:41.763435   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:41.775556   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:41.775568   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:41.787209   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:41.787220   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:41.823970   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:41.823981   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:41.840806   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:41.840819   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:41.852050   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:41.852062   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:41.865424   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:41.865439   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:41.879252   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:41.879264   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:41.891382   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:41.891392   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:41.908971   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:41.908982   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:41.913087   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:41.913094   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:41.926937   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:41.926947   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:41.942203   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:41.942212   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:41.956104   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:41.956112   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:41.978528   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:41.978537   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:44.515312   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:49.517732   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:49.518105   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:49.559095   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:49.559229   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:49.580894   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:49.580978   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:49.595787   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:49.595865   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:49.609196   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:49.609266   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:49.620591   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:49.620656   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:49.631496   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:49.631568   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:49.642020   11947 logs.go:276] 0 containers: []
	W0701 05:09:49.642035   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:49.642100   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:49.654937   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:49.654953   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:49.654959   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:49.667299   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:49.667310   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:49.679309   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:49.679318   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:49.691099   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:49.691112   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:49.706438   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:49.706449   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:49.721244   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:49.721253   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:49.735036   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:49.735047   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:09:49.752689   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:49.752699   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:49.766405   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:49.766415   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:49.790982   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:49.790992   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:49.802779   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:49.802791   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:49.806931   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:49.806938   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:49.841340   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:49.841351   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:49.853184   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:49.853197   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:49.868513   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:49.868523   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:49.907205   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:49.907218   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:52.436907   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:09:57.439347   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:09:57.439674   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:09:57.475083   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:09:57.475230   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:09:57.497335   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:09:57.497430   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:09:57.517997   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:09:57.518071   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:09:57.529562   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:09:57.529636   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:09:57.541970   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:09:57.542033   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:09:57.553461   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:09:57.553530   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:09:57.563959   11947 logs.go:276] 0 containers: []
	W0701 05:09:57.563975   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:09:57.564034   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:09:57.574525   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:09:57.574542   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:09:57.574548   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:09:57.579341   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:09:57.579348   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:09:57.604304   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:09:57.604314   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:09:57.628441   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:09:57.628449   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:09:57.639723   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:09:57.639732   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:09:57.677245   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:09:57.677259   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:09:57.691259   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:09:57.691269   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:09:57.706468   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:09:57.706480   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:09:57.718732   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:09:57.718743   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:09:57.732503   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:09:57.732515   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:09:57.743965   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:09:57.743974   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:09:57.778767   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:09:57.778778   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:09:57.794891   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:09:57.794901   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:09:57.809787   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:09:57.809801   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:09:57.821327   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:09:57.821337   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:09:57.833728   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:09:57.833737   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:00.351831   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:05.354145   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:05.354345   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:05.370718   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:05.370806   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:05.384884   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:05.384959   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:05.396344   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:05.396408   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:05.407297   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:05.407363   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:05.418007   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:05.418069   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:05.428594   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:05.428663   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:05.438646   11947 logs.go:276] 0 containers: []
	W0701 05:10:05.438656   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:05.438709   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:05.449304   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:05.449319   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:05.449324   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:05.472877   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:05.472883   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:05.484953   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:05.484964   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:05.519816   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:05.519827   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:05.533929   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:05.533939   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:05.548916   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:05.548927   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:05.563668   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:05.563680   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:05.568015   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:05.568023   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:05.582682   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:05.582693   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:05.594954   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:05.594965   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:05.610657   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:05.610667   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:05.623071   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:05.623082   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:05.641812   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:05.641823   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:05.680226   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:05.680243   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:05.705855   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:05.705867   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:05.718042   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:05.718052   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:08.234195   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:13.235051   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:13.235245   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:13.258877   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:13.258969   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:13.273825   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:13.273897   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:13.286314   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:13.286378   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:13.298004   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:13.298074   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:13.309487   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:13.309553   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:13.324108   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:13.324170   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:13.334548   11947 logs.go:276] 0 containers: []
	W0701 05:10:13.334563   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:13.334622   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:13.344858   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:13.344876   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:13.344882   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:13.356244   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:13.356256   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:13.373586   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:13.373596   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:13.397599   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:13.397608   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:13.434897   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:13.434905   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:13.449166   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:13.449177   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:13.463209   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:13.463219   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:13.467836   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:13.467844   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:13.483720   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:13.483730   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:13.495996   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:13.496008   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:13.511733   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:13.511747   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:13.526046   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:13.526056   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:13.537134   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:13.537143   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:13.548819   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:13.548830   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:13.584290   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:13.584302   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:13.609865   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:13.609878   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:16.121980   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:21.124207   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:21.124395   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:21.139125   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:21.139202   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:21.150380   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:21.150450   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:21.161200   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:21.161267   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:21.171220   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:21.171291   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:21.181663   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:21.181730   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:21.192319   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:21.192385   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:21.204979   11947 logs.go:276] 0 containers: []
	W0701 05:10:21.204989   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:21.205043   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:21.215741   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:21.215760   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:21.215765   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:21.219965   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:21.219973   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:21.240502   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:21.240513   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:21.253859   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:21.253874   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:21.277508   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:21.277518   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:21.315004   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:21.315013   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:21.350768   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:21.350779   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:21.365293   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:21.365303   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:21.380233   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:21.380245   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:21.400876   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:21.400885   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:21.413726   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:21.413735   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:21.442710   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:21.442719   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:21.454635   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:21.454647   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:21.466296   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:21.466307   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:21.477806   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:21.477820   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:21.492144   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:21.492155   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:24.007882   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:29.009216   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:29.009342   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:29.021822   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:29.021891   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:29.033021   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:29.033093   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:29.046313   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:29.046386   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:29.057170   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:29.057236   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:29.067820   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:29.067887   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:29.078398   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:29.078462   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:29.088667   11947 logs.go:276] 0 containers: []
	W0701 05:10:29.088679   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:29.088737   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:29.099875   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:29.099894   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:29.099899   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:29.138042   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:29.138059   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:29.153062   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:29.153073   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:29.164946   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:29.164956   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:29.186221   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:29.186232   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:29.199993   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:29.200002   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:29.211362   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:29.211372   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:29.215619   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:29.215626   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:29.238342   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:29.238351   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:29.252175   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:29.252185   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:29.276335   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:29.276350   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:29.287575   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:29.287587   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:29.311558   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:29.311568   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:29.345750   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:29.345761   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:29.361679   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:29.361695   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:29.373098   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:29.373108   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:31.889083   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:36.891473   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:36.891677   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:36.911913   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:36.912008   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:36.927432   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:36.927512   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:36.939276   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:36.939345   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:36.950173   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:36.950246   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:36.960606   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:36.960673   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:36.974549   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:36.974616   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:36.985293   11947 logs.go:276] 0 containers: []
	W0701 05:10:36.985304   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:36.985365   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:36.995477   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:36.995493   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:36.995499   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:36.999776   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:36.999783   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:37.013711   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:37.013721   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:37.028558   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:37.028569   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:37.051317   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:37.051328   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:37.062693   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:37.062704   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:37.097408   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:37.097418   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:37.111505   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:37.111519   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:37.135857   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:37.135868   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:37.156818   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:37.156830   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:37.174290   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:37.174300   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:37.213565   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:37.213577   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:37.228468   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:37.228482   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:37.242118   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:37.242128   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:37.257428   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:37.257438   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:37.268383   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:37.268393   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:39.780534   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:44.782553   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:44.782711   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:44.799181   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:44.799259   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:44.810281   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:44.810341   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:44.820727   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:44.820799   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:44.830800   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:44.830870   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:44.841342   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:44.841413   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:44.851398   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:44.851463   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:44.862040   11947 logs.go:276] 0 containers: []
	W0701 05:10:44.862051   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:44.862103   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:44.872275   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:44.872292   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:44.872298   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:44.888532   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:44.888545   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:44.902022   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:44.902034   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:44.913836   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:44.913850   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:44.918275   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:44.918283   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:44.952895   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:44.952904   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:44.965144   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:44.965159   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:44.986758   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:44.986766   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:45.022377   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:45.022387   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:45.039603   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:45.039616   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:45.051307   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:45.051318   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:45.065922   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:45.065934   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:45.077415   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:45.077428   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:45.096450   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:45.096460   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:45.121942   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:45.121954   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:45.136596   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:45.136606   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:47.653074   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:10:52.655490   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:10:52.655816   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:10:52.696076   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:10:52.696206   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:10:52.714729   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:10:52.714827   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:10:52.732074   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:10:52.732165   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:10:52.744744   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:10:52.744812   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:10:52.755348   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:10:52.755417   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:10:52.765815   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:10:52.765880   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:10:52.776215   11947 logs.go:276] 0 containers: []
	W0701 05:10:52.776226   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:10:52.776276   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:10:52.787392   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:10:52.787411   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:10:52.787417   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:10:52.806775   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:10:52.806791   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:10:52.821130   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:10:52.821143   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:10:52.846087   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:10:52.846098   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:10:52.860779   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:10:52.860788   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:10:52.874954   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:10:52.874965   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:10:52.887526   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:10:52.887538   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:10:52.921095   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:10:52.921109   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:10:52.937010   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:10:52.937021   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:10:52.951841   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:10:52.951851   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:10:52.990739   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:10:52.990747   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:10:52.995111   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:10:52.995118   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:10:53.010289   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:10:53.010300   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:10:53.022360   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:10:53.022372   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:10:53.037568   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:10:53.037580   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:10:53.049873   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:10:53.049885   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:10:55.575684   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:00.577861   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:00.578008   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:11:00.594488   11947 logs.go:276] 2 containers: [c34d261a49e8 4290f4ea2713]
	I0701 05:11:00.594574   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:11:00.609258   11947 logs.go:276] 2 containers: [10b407062ce8 d5dd8fab773c]
	I0701 05:11:00.609330   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:11:00.620026   11947 logs.go:276] 1 containers: [0a08294f23a8]
	I0701 05:11:00.620089   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:11:00.630938   11947 logs.go:276] 2 containers: [6e97c588e33b b42d072377e4]
	I0701 05:11:00.631011   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:11:00.644847   11947 logs.go:276] 1 containers: [c9e601a2f02b]
	I0701 05:11:00.644915   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:11:00.654921   11947 logs.go:276] 2 containers: [3741cba0791a 6093aa79356b]
	I0701 05:11:00.654986   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:11:00.664738   11947 logs.go:276] 0 containers: []
	W0701 05:11:00.664748   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:11:00.664798   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:11:00.676978   11947 logs.go:276] 1 containers: [3306a91d87f8]
	I0701 05:11:00.676996   11947 logs.go:123] Gathering logs for kube-controller-manager [3741cba0791a] ...
	I0701 05:11:00.677003   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3741cba0791a"
	I0701 05:11:00.693783   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:11:00.693793   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:11:00.717670   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:11:00.717680   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:11:00.729894   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:11:00.729904   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:11:00.734689   11947 logs.go:123] Gathering logs for kube-scheduler [6e97c588e33b] ...
	I0701 05:11:00.734696   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e97c588e33b"
	I0701 05:11:00.746685   11947 logs.go:123] Gathering logs for kube-scheduler [b42d072377e4] ...
	I0701 05:11:00.746696   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b42d072377e4"
	I0701 05:11:00.772607   11947 logs.go:123] Gathering logs for storage-provisioner [3306a91d87f8] ...
	I0701 05:11:00.772617   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3306a91d87f8"
	I0701 05:11:00.784408   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:11:00.784418   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:11:00.821466   11947 logs.go:123] Gathering logs for kube-apiserver [c34d261a49e8] ...
	I0701 05:11:00.821477   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34d261a49e8"
	I0701 05:11:00.836137   11947 logs.go:123] Gathering logs for etcd [10b407062ce8] ...
	I0701 05:11:00.836147   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b407062ce8"
	I0701 05:11:00.850119   11947 logs.go:123] Gathering logs for etcd [d5dd8fab773c] ...
	I0701 05:11:00.850129   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5dd8fab773c"
	I0701 05:11:00.864886   11947 logs.go:123] Gathering logs for kube-controller-manager [6093aa79356b] ...
	I0701 05:11:00.864896   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093aa79356b"
	I0701 05:11:00.878396   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:11:00.878406   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:11:00.916825   11947 logs.go:123] Gathering logs for kube-apiserver [4290f4ea2713] ...
	I0701 05:11:00.916835   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4290f4ea2713"
	I0701 05:11:00.943459   11947 logs.go:123] Gathering logs for coredns [0a08294f23a8] ...
	I0701 05:11:00.943473   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a08294f23a8"
	I0701 05:11:00.954760   11947 logs.go:123] Gathering logs for kube-proxy [c9e601a2f02b] ...
	I0701 05:11:00.954774   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e601a2f02b"
	I0701 05:11:03.470205   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:08.472484   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:08.472559   11947 kubeadm.go:591] duration metric: took 4m3.564007333s to restartPrimaryControlPlane
	W0701 05:11:08.472620   11947 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
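
The five-second gap between each "Checking apiserver healthz" line and its matching "stopped: ... Client.Timeout exceeded" line above suggests a bounded per-probe HTTP timeout inside an overall retry deadline. The following is a minimal sketch of that polling pattern, not minikube's actual api_server.go; the endpoint and the ~5s probe timeout are inferred from the log timestamps, and the backoff interval is a guess.

	// Sketch of the healthz polling pattern visible in the log above.
	// Illustration only, not minikube's api_server.go: the 5s per-probe
	// timeout and the endpoint are inferred from the log timestamps.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // each probe gives up after ~5s, matching the log
			Transport: &http.Transport{
				// the VM's apiserver cert is not trusted by the host, so skip verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute) // overall budget; the log shows ~4m before reset
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Printf("stopped: %v\n", err) // e.g. Client.Timeout exceeded while awaiting headers
				time.Sleep(2500 * time.Millisecond) // assumed backoff before the next probe
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz ok")
				return
			}
		}
		fmt.Println("gave up waiting for apiserver healthz")
	}

In the run above this loop never sees a 200, so minikube falls through to the "kubeadm reset" path on the next line.
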
	I0701 05:11:08.472643   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0701 05:11:09.509165   11947 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.036515375s)
	I0701 05:11:09.509240   11947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 05:11:09.514197   11947 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 05:11:09.516966   11947 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 05:11:09.519502   11947 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 05:11:09.519508   11947 kubeadm.go:156] found existing configuration files:
	
	I0701 05:11:09.519533   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf
	I0701 05:11:09.521906   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0701 05:11:09.521927   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0701 05:11:09.524582   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf
	I0701 05:11:09.527146   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0701 05:11:09.527175   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0701 05:11:09.530782   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf
	I0701 05:11:09.533697   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0701 05:11:09.533814   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 05:11:09.536822   11947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf
	I0701 05:11:09.539417   11947 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0701 05:11:09.539439   11947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 05:11:09.542158   11947 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0701 05:11:09.559565   11947 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0701 05:11:09.559703   11947 kubeadm.go:309] [preflight] Running pre-flight checks
	I0701 05:11:09.610518   11947 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0701 05:11:09.610572   11947 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0701 05:11:09.610615   11947 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0701 05:11:09.661299   11947 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0701 05:11:09.665540   11947 out.go:204]   - Generating certificates and keys ...
	I0701 05:11:09.665576   11947 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0701 05:11:09.665610   11947 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0701 05:11:09.665647   11947 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0701 05:11:09.665685   11947 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0701 05:11:09.665718   11947 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0701 05:11:09.665756   11947 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0701 05:11:09.665788   11947 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0701 05:11:09.665825   11947 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0701 05:11:09.665864   11947 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0701 05:11:09.665901   11947 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0701 05:11:09.665918   11947 kubeadm.go:309] [certs] Using the existing "sa" key
	I0701 05:11:09.665949   11947 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0701 05:11:09.694473   11947 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0701 05:11:09.848613   11947 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0701 05:11:10.178440   11947 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0701 05:11:10.416487   11947 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0701 05:11:10.446926   11947 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0701 05:11:10.448401   11947 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0701 05:11:10.448424   11947 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0701 05:11:10.546518   11947 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0701 05:11:10.553435   11947 out.go:204]   - Booting up control plane ...
	I0701 05:11:10.553558   11947 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0701 05:11:10.553606   11947 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0701 05:11:10.553643   11947 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0701 05:11:10.553763   11947 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0701 05:11:10.553848   11947 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0701 05:11:15.050542   11947 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504518 seconds
	I0701 05:11:15.050600   11947 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0701 05:11:15.054073   11947 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0701 05:11:15.570379   11947 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0701 05:11:15.570482   11947 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-841000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0701 05:11:16.075022   11947 kubeadm.go:309] [bootstrap-token] Using token: vt4d8y.l8stakfyrhjy34q0
	I0701 05:11:16.079249   11947 out.go:204]   - Configuring RBAC rules ...
	I0701 05:11:16.079309   11947 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0701 05:11:16.079365   11947 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0701 05:11:16.085680   11947 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0701 05:11:16.086725   11947 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0701 05:11:16.087614   11947 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0701 05:11:16.088452   11947 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0701 05:11:16.091737   11947 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0701 05:11:16.250837   11947 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0701 05:11:16.478676   11947 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0701 05:11:16.479337   11947 kubeadm.go:309] 
	I0701 05:11:16.479369   11947 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0701 05:11:16.479371   11947 kubeadm.go:309] 
	I0701 05:11:16.479420   11947 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0701 05:11:16.479424   11947 kubeadm.go:309] 
	I0701 05:11:16.479436   11947 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0701 05:11:16.479466   11947 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0701 05:11:16.479502   11947 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0701 05:11:16.479525   11947 kubeadm.go:309] 
	I0701 05:11:16.479638   11947 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0701 05:11:16.479647   11947 kubeadm.go:309] 
	I0701 05:11:16.479708   11947 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0701 05:11:16.479720   11947 kubeadm.go:309] 
	I0701 05:11:16.479762   11947 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0701 05:11:16.479814   11947 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0701 05:11:16.479863   11947 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0701 05:11:16.479869   11947 kubeadm.go:309] 
	I0701 05:11:16.479907   11947 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0701 05:11:16.480010   11947 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0701 05:11:16.480043   11947 kubeadm.go:309] 
	I0701 05:11:16.480149   11947 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vt4d8y.l8stakfyrhjy34q0 \
	I0701 05:11:16.480238   11947 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7272984af11ea72dfc66ced44d2e729700629bab3eeb62bb340890f2c50dfe86 \
	I0701 05:11:16.480252   11947 kubeadm.go:309] 	--control-plane 
	I0701 05:11:16.480256   11947 kubeadm.go:309] 
	I0701 05:11:16.480301   11947 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0701 05:11:16.480305   11947 kubeadm.go:309] 
	I0701 05:11:16.480400   11947 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vt4d8y.l8stakfyrhjy34q0 \
	I0701 05:11:16.480468   11947 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7272984af11ea72dfc66ced44d2e729700629bab3eeb62bb340890f2c50dfe86 
	I0701 05:11:16.480536   11947 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0701 05:11:16.480548   11947 cni.go:84] Creating CNI manager for ""
	I0701 05:11:16.480555   11947 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:11:16.484333   11947 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0701 05:11:16.491274   11947 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0701 05:11:16.494246   11947 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
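
The line above shows minikube generating a 496-byte bridge conflist in memory and copying it to /etc/cni/net.d/1-k8s.conflist. The actual contents are not in the log, so the snippet below writes a hypothetical minimal bridge CNI config of the general shape the CNI spec defines; the file path comes from the log, while the subnet and plugin fields are illustrative assumptions.

	// Hypothetical illustration only: the real 1-k8s.conflist minikube writes
	// is not shown in the log. This is a minimal bridge CNI config per the
	// CNI spec; the 10.244.0.0/16 subnet is an assumed placeholder.
	package main

	import "os"

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    }
	  ]
	}
	`

	func main() {
		// 0644 so the kubelet/CNI runtime can read the config
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
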
	I0701 05:11:16.499178   11947 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 05:11:16.499220   11947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 05:11:16.499223   11947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-841000 minikube.k8s.io/updated_at=2024_07_01T05_11_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c minikube.k8s.io/name=stopped-upgrade-841000 minikube.k8s.io/primary=true
	I0701 05:11:16.537881   11947 ops.go:34] apiserver oom_adj: -16
	I0701 05:11:16.537958   11947 kubeadm.go:1107] duration metric: took 38.775417ms to wait for elevateKubeSystemPrivileges
	W0701 05:11:16.537974   11947 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0701 05:11:16.537979   11947 kubeadm.go:393] duration metric: took 4m11.6424805s to StartCluster
	I0701 05:11:16.537988   11947 settings.go:142] acquiring lock: {Name:mk8a5112b51a742a29c931ccf59ae86bde00a5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:11:16.538077   11947 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:11:16.538486   11947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/kubeconfig: {Name:mk4c90f67cab929310b408048010034fbad6f093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:11:16.538712   11947 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:11:16.538770   11947 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 05:11:16.538801   11947 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-841000"
	I0701 05:11:16.538813   11947 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-841000"
	W0701 05:11:16.538817   11947 addons.go:243] addon storage-provisioner should already be in state true
	I0701 05:11:16.538820   11947 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:11:16.538822   11947 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-841000"
	I0701 05:11:16.538854   11947 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-841000"
	I0701 05:11:16.538828   11947 host.go:66] Checking if "stopped-upgrade-841000" exists ...
	I0701 05:11:16.539296   11947 retry.go:31] will retry after 1.032144928s: connect: dial unix /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/monitor: connect: connection refused
	I0701 05:11:16.539979   11947 kapi.go:59] client config for stopped-upgrade-841000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.key", CAFile:"/Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105d4d090), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
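
The rest.Config dump above shows the client authenticating with per-profile certificates rather than a token. A hedged client-go sketch of building an equivalent client follows; the host and file paths are taken verbatim from the dump, but this is illustrative and not minikube's kapi.go (which also installs a retrying transport wrapper, visible as WrapTransport above).

	// Sketch of constructing a client equivalent to the rest.Config dumped
	// above. Illustration only; paths and host come from the log.
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func newClient() (*kubernetes.Clientset, error) {
		cfg := &rest.Config{
			Host: "https://10.0.2.15:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.crt",
				KeyFile:  "/Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/stopped-upgrade-841000/client.key",
				CAFile:   "/Users/jenkins/minikube-integration/19166-9507/.minikube/ca.crt",
			},
		}
		return kubernetes.NewForConfig(cfg)
	}

	func main() {
		if _, err := newClient(); err != nil {
			panic(err)
		}
	}
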
	I0701 05:11:16.547496   11947 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-841000"
	W0701 05:11:16.547503   11947 addons.go:243] addon default-storageclass should already be in state true
	I0701 05:11:16.547514   11947 host.go:66] Checking if "stopped-upgrade-841000" exists ...
	I0701 05:11:16.547564   11947 out.go:177] * Verifying Kubernetes components...
	I0701 05:11:16.548562   11947 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 05:11:16.548571   11947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 05:11:16.548587   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0701 05:11:16.551299   11947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 05:11:16.628436   11947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 05:11:16.633771   11947 api_server.go:52] waiting for apiserver process to appear ...
	I0701 05:11:16.633813   11947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 05:11:16.637658   11947 api_server.go:72] duration metric: took 98.937125ms to wait for apiserver process to appear ...
	I0701 05:11:16.637665   11947 api_server.go:88] waiting for apiserver healthz status ...
	I0701 05:11:16.637672   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:16.659408   11947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 05:11:17.577567   11947 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 05:11:17.581597   11947 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 05:11:17.581603   11947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 05:11:17.581611   11947 sshutil.go:53] new ssh client: &{IP:localhost Port:52333 SSHKeyPath:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0701 05:11:17.611451   11947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 05:11:21.650604   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:21.650647   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:26.660565   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:26.660621   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:31.667956   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:31.668003   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:36.673560   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:36.673603   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:41.677996   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:41.678037   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:46.681545   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:46.681587   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0701 05:11:47.051078   11947 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0701 05:11:47.055481   11947 out.go:177] * Enabled addons: storage-provisioner
	I0701 05:11:47.068392   11947 addons.go:510] duration metric: took 30.490796917s for enable addons: enabled=[storage-provisioner]
	I0701 05:11:51.684589   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:51.684612   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:11:56.687280   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:11:56.687301   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:12:01.689015   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:12:01.689054   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:12:06.691777   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:12:06.691822   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:12:11.694695   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:12:11.694740   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:12:16.697463   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:12:16.697583   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:12:16.708945   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:12:16.709021   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:12:16.731367   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:12:16.731454   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:12:16.746216   11947 logs.go:276] 2 containers: [64291998e86e c6de92ff821f]
	I0701 05:12:16.746287   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:12:16.757501   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:12:16.757573   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:12:16.771961   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:12:16.772043   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:12:16.782730   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:12:16.782806   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:12:16.793742   11947 logs.go:276] 0 containers: []
	W0701 05:12:16.793755   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:12:16.793814   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:12:16.805018   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:12:16.805035   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:12:16.805043   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:12:16.820762   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:12:16.820776   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:12:16.832442   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:12:16.832452   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:12:16.845394   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:12:16.845405   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:12:16.880214   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:12:16.880223   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:12:16.885026   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:12:16.885034   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:12:16.922520   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:12:16.922531   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:12:16.935008   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:12:16.935018   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:12:16.952337   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:12:16.952346   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:12:16.977913   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:12:16.977921   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:12:16.988946   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:12:16.988957   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:12:17.003377   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:12:17.003387   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:12:17.017112   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:12:17.017125   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:12:19.530984   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:12:24.532517   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:12:24.532580   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:12:24.545681   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:12:24.545768   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:12:24.556846   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:12:24.556918   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:12:24.567332   11947 logs.go:276] 2 containers: [64291998e86e c6de92ff821f]
	I0701 05:12:24.567414   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:12:24.577934   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:12:24.577998   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:12:24.589373   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:12:24.589446   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:12:24.601682   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:12:24.601737   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:12:24.612602   11947 logs.go:276] 0 containers: []
	W0701 05:12:24.612615   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:12:24.612671   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:12:24.623793   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:12:24.623805   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:12:24.623811   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:12:24.662745   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:12:24.662755   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:12:24.679390   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:12:24.679398   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:12:24.691180   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:12:24.691189   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:12:24.715416   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:12:24.715427   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:12:24.727979   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:12:24.727989   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:12:24.761735   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:12:24.761751   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:12:24.767161   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:12:24.767174   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:12:24.783407   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:12:24.783423   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:12:24.800738   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:12:24.800755   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:12:24.817184   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:12:24.817196   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:12:24.831010   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:12:24.831020   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:12:24.859724   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:12:24.859741   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
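The block above is one complete diagnostic pass: the apiserver healthz probe at https://10.0.2.15:8443/healthz gives up after ~5 s (the Client.Timeout in the "stopped" lines), minikube then re-enumerates each control-plane container by name filter and tails the last 400 lines of its log, and the next probe starts ~7.8 s after the previous one (05:12:19.530 → 05:12:27.375: the 5 s timeout plus the gathering pass and a short pause). To reproduce the probe and one gathering step by hand, a minimal sketch, assuming shell access inside the guest (e.g. via `minikube ssh`) and that `curl` is available there; the endpoint, filter, and container ID are taken verbatim from the log above:

	# Probe the apiserver health endpoint; -k skips TLS verification and
	# --max-time mirrors the ~5 s client timeout seen in the log.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz

	# Enumerate a control-plane container the same way minikube does.
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}

	# Tail the last 400 lines of its log, as in the "Gathering logs" steps.
	docker logs --tail 400 363175f1f73f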
	I0701 05:12:27.375235   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:12:32.378062   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:12:32.378144   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:12:32.388977   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:12:32.389044   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:12:32.403527   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:12:32.403593   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:12:32.414245   11947 logs.go:276] 2 containers: [64291998e86e c6de92ff821f]
	I0701 05:12:32.414311   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:12:32.424740   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:12:32.424805   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:12:32.435682   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:12:32.435750   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:12:32.445857   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:12:32.445919   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:12:32.456438   11947 logs.go:276] 0 containers: []
	W0701 05:12:32.456449   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:12:32.456506   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:12:32.472769   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:12:32.472781   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:12:32.472787   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:12:32.506933   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:12:32.506944   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:12:32.540224   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:12:32.540235   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:12:32.554375   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:12:32.554386   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:12:32.568463   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:12:32.568475   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:12:32.579632   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:12:32.579644   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:12:32.591959   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:12:32.591973   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:12:32.603597   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:12:32.603608   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:12:32.614942   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:12:32.614955   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:12:32.619056   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:12:32.619063   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:12:32.634292   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:12:32.634303   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:12:32.651181   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:12:32.651190   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:12:32.674200   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:12:32.674207   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:12:35.185833   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:12:40.186348   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:12:40.186774   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:12:40.225340   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:12:40.225474   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:12:40.247672   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:12:40.247777   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:12:40.263836   11947 logs.go:276] 2 containers: [64291998e86e c6de92ff821f]
	I0701 05:12:40.263909   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:12:40.276500   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:12:40.276575   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:12:40.287686   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:12:40.287749   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:12:40.298030   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:12:40.298103   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:12:40.308039   11947 logs.go:276] 0 containers: []
	W0701 05:12:40.308050   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:12:40.308103   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:12:40.317913   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:12:40.317928   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:12:40.317934   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:12:40.336101   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:12:40.336111   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:12:40.347593   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:12:40.347602   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:12:40.364449   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:12:40.364458   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:12:40.389591   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:12:40.389601   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:12:40.424390   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:12:40.424398   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:12:40.439038   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:12:40.439051   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:12:40.450258   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:12:40.450270   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:12:40.461860   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:12:40.461874   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:12:40.473260   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:12:40.473270   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:12:40.484438   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:12:40.484451   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:12:40.488693   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:12:40.488703   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:12:40.522046   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:12:40.522059   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:12:43.040177   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:12:48.042793   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:12:48.043250   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:12:48.084405   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:12:48.084524   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:12:48.106391   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:12:48.106503   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:12:48.122241   11947 logs.go:276] 2 containers: [64291998e86e c6de92ff821f]
	I0701 05:12:48.122316   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:12:48.134430   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:12:48.134501   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:12:48.153289   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:12:48.153353   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:12:48.164167   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:12:48.164228   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:12:48.174514   11947 logs.go:276] 0 containers: []
	W0701 05:12:48.174526   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:12:48.174577   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:12:48.185932   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:12:48.185945   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:12:48.185951   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:12:48.199934   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:12:48.199944   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:12:48.211444   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:12:48.211458   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:12:48.226969   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:12:48.226981   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:12:48.250060   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:12:48.250069   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:12:48.260897   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:12:48.260908   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:12:48.265589   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:12:48.265598   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:12:48.299745   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:12:48.299756   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:12:48.311974   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:12:48.311987   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:12:48.323436   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:12:48.323449   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:12:48.340444   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:12:48.340456   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:12:48.351572   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:12:48.351584   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:12:48.384742   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:12:48.384756   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:12:50.901029   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:12:55.903577   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:12:55.903964   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:12:55.936132   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:12:55.936256   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:12:55.955740   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:12:55.955829   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:12:55.970336   11947 logs.go:276] 2 containers: [64291998e86e c6de92ff821f]
	I0701 05:12:55.970409   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:12:55.982712   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:12:55.982774   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:12:55.995278   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:12:55.995342   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:12:56.005561   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:12:56.005626   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:12:56.016265   11947 logs.go:276] 0 containers: []
	W0701 05:12:56.016275   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:12:56.016328   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:12:56.026958   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:12:56.026974   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:12:56.026980   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:12:56.031405   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:12:56.031414   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:12:56.071453   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:12:56.071464   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:12:56.083332   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:12:56.083345   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:12:56.100684   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:12:56.100697   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:12:56.125600   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:12:56.125606   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:12:56.136975   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:12:56.136988   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:12:56.169807   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:12:56.169814   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:12:56.184946   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:12:56.184960   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:12:56.198483   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:12:56.198492   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:12:56.209987   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:12:56.209999   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:12:56.221281   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:12:56.221290   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:12:56.240674   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:12:56.240683   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:12:58.753420   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:13:03.756316   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:13:03.756753   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:13:03.790617   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:13:03.790741   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:13:03.812915   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:13:03.813010   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:13:03.826888   11947 logs.go:276] 2 containers: [64291998e86e c6de92ff821f]
	I0701 05:13:03.826967   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:13:03.839845   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:13:03.839915   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:13:03.850973   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:13:03.851044   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:13:03.862166   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:13:03.862229   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:13:03.872658   11947 logs.go:276] 0 containers: []
	W0701 05:13:03.872672   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:13:03.872728   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:13:03.883818   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:13:03.883832   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:13:03.883837   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:13:03.898572   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:13:03.898584   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:13:03.914891   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:13:03.914905   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:13:03.926494   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:13:03.926505   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:13:03.937605   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:13:03.937617   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:13:03.942144   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:13:03.942150   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:13:03.980049   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:13:03.980061   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:13:03.995328   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:13:03.995337   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:13:04.006754   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:13:04.006766   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:13:04.018471   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:13:04.018484   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:13:04.033424   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:13:04.033436   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:13:04.051053   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:13:04.051062   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:13:04.076085   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:13:04.076092   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:13:06.612272   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:13:11.614851   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:13:11.615257   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:13:11.648675   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:13:11.648806   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:13:11.668131   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:13:11.668225   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:13:11.683038   11947 logs.go:276] 2 containers: [64291998e86e c6de92ff821f]
	I0701 05:13:11.683115   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:13:11.695147   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:13:11.695213   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:13:11.708353   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:13:11.708426   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:13:11.718622   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:13:11.718684   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:13:11.728760   11947 logs.go:276] 0 containers: []
	W0701 05:13:11.728772   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:13:11.728825   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:13:11.739569   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:13:11.739586   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:13:11.739591   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:13:11.753482   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:13:11.753491   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:13:11.765096   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:13:11.765109   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:13:11.787448   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:13:11.787458   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:13:11.799014   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:13:11.799026   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:13:11.834396   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:13:11.834403   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:13:11.838797   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:13:11.838807   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:13:11.874175   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:13:11.874188   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:13:11.887089   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:13:11.887097   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:13:11.911618   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:13:11.911628   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:13:11.922753   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:13:11.922765   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:13:11.944633   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:13:11.944642   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:13:11.956419   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:13:11.956432   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:13:14.483093   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:13:19.485808   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:13:19.486047   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:13:19.508023   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:13:19.508136   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:13:19.524118   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:13:19.524198   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:13:19.536262   11947 logs.go:276] 2 containers: [64291998e86e c6de92ff821f]
	I0701 05:13:19.536330   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:13:19.547294   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:13:19.547357   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:13:19.557561   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:13:19.557631   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:13:19.568048   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:13:19.568119   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:13:19.578459   11947 logs.go:276] 0 containers: []
	W0701 05:13:19.578469   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:13:19.578524   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:13:19.588974   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:13:19.588990   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:13:19.588994   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:13:19.602675   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:13:19.602688   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:13:19.618595   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:13:19.618606   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:13:19.629572   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:13:19.629584   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:13:19.634164   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:13:19.634171   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:13:19.648502   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:13:19.648514   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:13:19.668955   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:13:19.668966   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:13:19.680562   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:13:19.680573   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:13:19.691759   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:13:19.691770   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:13:19.709143   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:13:19.709155   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:13:19.732568   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:13:19.732575   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:13:19.743385   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:13:19.743396   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:13:19.777429   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:13:19.777440   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:13:22.314310   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:13:27.316948   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:13:27.317346   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:13:27.365111   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:13:27.365233   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:13:27.385721   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:13:27.385815   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:13:27.400361   11947 logs.go:276] 2 containers: [64291998e86e c6de92ff821f]
	I0701 05:13:27.400425   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:13:27.412904   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:13:27.412975   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:13:27.423429   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:13:27.423493   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:13:27.434102   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:13:27.434171   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:13:27.452134   11947 logs.go:276] 0 containers: []
	W0701 05:13:27.452143   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:13:27.452196   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:13:27.462534   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:13:27.462550   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:13:27.462555   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:13:27.474313   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:13:27.474324   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:13:27.499063   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:13:27.499070   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:13:27.533208   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:13:27.533217   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:13:27.537141   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:13:27.537147   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:13:27.552752   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:13:27.552764   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:13:27.566643   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:13:27.566654   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:13:27.578480   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:13:27.578490   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:13:27.589738   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:13:27.589748   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:13:27.625503   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:13:27.625515   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:13:27.640775   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:13:27.640784   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:13:27.658045   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:13:27.658055   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:13:27.669947   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:13:27.669961   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:13:30.185656   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:13:35.185909   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:13:35.185970   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:13:35.198970   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:13:35.199031   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:13:35.211397   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:13:35.211442   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:13:35.223039   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:13:35.223101   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:13:35.233863   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:13:35.233924   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:13:35.244587   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:13:35.244635   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:13:35.259094   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:13:35.259147   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:13:35.279621   11947 logs.go:276] 0 containers: []
	W0701 05:13:35.279629   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:13:35.279667   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:13:35.291504   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:13:35.291520   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:13:35.291525   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:13:35.325520   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:13:35.325537   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:13:35.341969   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:13:35.341982   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:13:35.385297   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:13:35.385315   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:13:35.402154   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:13:35.402169   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:13:35.415504   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:13:35.415517   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:13:35.420336   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:13:35.420351   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:13:35.434798   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:13:35.434811   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:13:35.452465   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:13:35.452475   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:13:35.466043   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:13:35.466054   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:13:35.483871   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:13:35.483881   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:13:35.508979   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:13:35.508986   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:13:35.521585   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:13:35.521594   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:13:35.536386   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:13:35.536395   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:13:35.548289   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:13:35.548297   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
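At 05:13:35 the coredns enumeration grows from 2 containers to 4 ([d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]), which suggests the coredns pods were restarted or replaced while the apiserver stayed unreachable. To tell fresh containers from exited ones, a hedged sketch, assuming the Docker CLI inside the guest; {{.Names}} and {{.Status}} are standard `docker ps` format fields:

	# List all coredns-named containers with their state, so restarts and
	# exited instances are visible alongside the IDs minikube reports.
	docker ps -a --filter=name=k8s_coredns --format '{{.ID}} {{.Names}} {{.Status}}'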
	I0701 05:13:38.073825   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:13:43.076278   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:13:43.076461   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:13:43.101801   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:13:43.101909   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:13:43.119247   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:13:43.119324   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:13:43.132575   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:13:43.132639   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:13:43.144716   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:13:43.144779   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:13:43.156243   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:13:43.156309   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:13:43.166585   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:13:43.166654   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:13:43.180534   11947 logs.go:276] 0 containers: []
	W0701 05:13:43.180544   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:13:43.180595   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:13:43.191402   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:13:43.191418   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:13:43.191424   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:13:43.202816   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:13:43.202828   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:13:43.214308   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:13:43.214319   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:13:43.247560   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:13:43.247567   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:13:43.258985   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:13:43.258995   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:13:43.270385   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:13:43.270395   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:13:43.281937   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:13:43.281948   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:13:43.286722   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:13:43.286731   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:13:43.300627   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:13:43.300636   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:13:43.317510   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:13:43.317520   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:13:43.343143   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:13:43.343152   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:13:43.377348   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:13:43.377360   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:13:43.397411   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:13:43.397422   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:13:43.412460   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:13:43.412472   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:13:43.423812   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:13:43.423829   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:13:45.944579   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:13:50.946897   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:13:50.947429   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:13:50.989665   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:13:50.989803   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:13:51.011410   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:13:51.011513   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:13:51.026372   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:13:51.026452   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:13:51.038833   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:13:51.038900   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:13:51.055491   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:13:51.055550   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:13:51.066915   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:13:51.066977   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:13:51.077476   11947 logs.go:276] 0 containers: []
	W0701 05:13:51.077485   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:13:51.077538   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:13:51.088448   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:13:51.088467   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:13:51.088472   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:13:51.100403   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:13:51.100416   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:13:51.116266   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:13:51.116278   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:13:51.150848   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:13:51.150856   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:13:51.155287   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:13:51.155296   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:13:51.167223   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:13:51.167237   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:13:51.202595   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:13:51.202607   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:13:51.216804   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:13:51.216815   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:13:51.228635   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:13:51.228646   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:13:51.242579   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:13:51.242592   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:13:51.254019   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:13:51.254031   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:13:51.269669   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:13:51.269681   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:13:51.287489   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:13:51.287498   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:13:51.299502   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:13:51.299512   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:13:51.324271   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:13:51.324280   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:13:53.837687   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:13:58.839752   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:13:58.839823   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:13:58.852510   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:13:58.852566   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:13:58.863472   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:13:58.863533   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:13:58.874357   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:13:58.874409   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:13:58.886943   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:13:58.886991   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:13:58.902663   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:13:58.902713   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:13:58.913359   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:13:58.913422   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:13:58.923339   11947 logs.go:276] 0 containers: []
	W0701 05:13:58.923347   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:13:58.923395   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:13:58.935033   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:13:58.935053   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:13:58.935061   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:13:58.961325   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:13:58.961337   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:13:58.975486   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:13:58.975496   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:13:58.988570   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:13:58.988586   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:13:59.000828   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:13:59.000840   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:13:59.019252   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:13:59.019263   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:13:59.031102   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:13:59.031112   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:13:59.046227   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:13:59.046237   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:13:59.061766   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:13:59.061776   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:13:59.074468   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:13:59.074480   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:13:59.086504   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:13:59.086515   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:13:59.101367   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:13:59.101375   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:13:59.135161   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:13:59.135172   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:13:59.170223   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:13:59.170236   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:13:59.187184   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:13:59.187200   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:14:01.693777   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:14:06.695392   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:14:06.695491   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:14:06.709006   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:14:06.709081   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:14:06.720738   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:14:06.720809   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:14:06.738603   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:14:06.738679   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:14:06.749924   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:14:06.749993   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:14:06.760259   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:14:06.760325   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:14:06.774051   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:14:06.774117   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:14:06.785725   11947 logs.go:276] 0 containers: []
	W0701 05:14:06.785736   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:14:06.785793   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:14:06.796629   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:14:06.796646   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:14:06.796652   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:14:06.801098   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:14:06.801105   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:14:06.815099   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:14:06.815111   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:14:06.829232   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:14:06.829244   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:14:06.853751   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:14:06.853765   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:14:06.871250   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:14:06.871259   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:14:06.894743   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:14:06.894750   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:14:06.906236   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:14:06.906251   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:14:06.917718   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:14:06.917729   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:14:06.932995   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:14:06.933005   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:14:06.944391   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:14:06.944399   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:14:06.978770   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:14:06.978778   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:14:07.012252   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:14:07.012266   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:14:07.023929   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:14:07.023940   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:14:07.035375   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:14:07.035385   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:14:09.547305   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:14:14.549690   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:14:14.550098   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:14:14.593511   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:14:14.593641   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:14:14.616581   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:14:14.616689   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:14:14.632510   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:14:14.632580   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:14:14.645160   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:14:14.645229   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:14:14.655843   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:14:14.655911   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:14:14.668958   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:14:14.669023   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:14:14.679263   11947 logs.go:276] 0 containers: []
	W0701 05:14:14.679273   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:14:14.679325   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:14:14.690304   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:14:14.690323   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:14:14.690328   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:14:14.724871   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:14:14.724884   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:14:14.741562   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:14:14.741574   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:14:14.753281   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:14:14.753291   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:14:14.776645   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:14:14.776656   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:14:14.788176   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:14:14.788188   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:14:14.823667   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:14:14.823684   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:14:14.839263   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:14:14.839275   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:14:14.854193   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:14:14.854204   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:14:14.870326   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:14:14.870337   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:14:14.882484   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:14:14.882493   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:14:14.898034   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:14:14.898047   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:14:14.923306   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:14:14.923313   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:14:14.934653   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:14:14.934665   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:14:14.939061   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:14:14.939068   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:14:17.453043   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:14:22.455791   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:14:22.455868   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:14:22.467433   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:14:22.467484   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:14:22.480977   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:14:22.481044   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:14:22.491900   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:14:22.491960   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:14:22.502920   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:14:22.502976   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:14:22.514704   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:14:22.514780   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:14:22.526716   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:14:22.526762   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:14:22.540760   11947 logs.go:276] 0 containers: []
	W0701 05:14:22.540770   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:14:22.540826   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:14:22.551820   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:14:22.551834   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:14:22.551839   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:14:22.568708   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:14:22.568717   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:14:22.584293   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:14:22.584305   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:14:22.598009   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:14:22.598019   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:14:22.611995   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:14:22.612010   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:14:22.636594   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:14:22.636607   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:14:22.671694   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:14:22.671709   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:14:22.688559   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:14:22.688570   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:14:22.700273   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:14:22.700281   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:14:22.704693   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:14:22.704700   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:14:22.718973   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:14:22.718984   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:14:22.732534   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:14:22.732546   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:14:22.750562   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:14:22.750573   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:14:22.769264   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:14:22.769278   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:14:22.783004   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:14:22.783013   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:14:25.322233   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:14:30.322672   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:14:30.322951   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:14:30.351694   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:14:30.351811   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:14:30.369656   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:14:30.369730   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:14:30.386894   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:14:30.386972   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:14:30.397719   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:14:30.397783   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:14:30.408473   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:14:30.408543   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:14:30.418760   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:14:30.418825   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:14:30.428832   11947 logs.go:276] 0 containers: []
	W0701 05:14:30.428840   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:14:30.428892   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:14:30.439239   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:14:30.439255   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:14:30.439260   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:14:30.456381   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:14:30.456392   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:14:30.467213   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:14:30.467228   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:14:30.479174   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:14:30.479185   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:14:30.490991   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:14:30.491003   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:14:30.502817   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:14:30.502825   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:14:30.517715   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:14:30.517724   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:14:30.535998   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:14:30.536012   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:14:30.548772   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:14:30.548785   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:14:30.560241   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:14:30.560252   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:14:30.594445   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:14:30.594452   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:14:30.629909   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:14:30.629921   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:14:30.645175   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:14:30.645187   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:14:30.671371   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:14:30.671392   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:14:30.684939   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:14:30.684955   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:14:33.190467   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:14:38.191872   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:14:38.192306   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:14:38.232255   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:14:38.232368   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:14:38.254503   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:14:38.254632   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:14:38.269017   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:14:38.269096   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:14:38.280952   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:14:38.281018   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:14:38.296082   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:14:38.296137   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:14:38.306389   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:14:38.306452   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:14:38.316876   11947 logs.go:276] 0 containers: []
	W0701 05:14:38.316887   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:14:38.316941   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:14:38.327198   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:14:38.327212   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:14:38.327217   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:14:38.338604   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:14:38.338618   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:14:38.371998   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:14:38.372006   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:14:38.387687   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:14:38.387697   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:14:38.399463   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:14:38.399474   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:14:38.403907   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:14:38.403914   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:14:38.415350   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:14:38.415359   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:14:38.439945   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:14:38.439953   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:14:38.451362   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:14:38.451373   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:14:38.465152   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:14:38.465163   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:14:38.480728   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:14:38.480738   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:14:38.499952   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:14:38.499961   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:14:38.511761   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:14:38.511773   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:14:38.546218   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:14:38.546231   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:14:38.558079   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:14:38.558089   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:14:41.082550   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:14:46.085213   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:14:46.085290   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:14:46.097279   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:14:46.097337   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:14:46.109101   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:14:46.109189   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:14:46.120708   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:14:46.120784   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:14:46.132109   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:14:46.132172   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:14:46.143718   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:14:46.143785   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:14:46.154943   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:14:46.155009   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:14:46.166474   11947 logs.go:276] 0 containers: []
	W0701 05:14:46.166486   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:14:46.166543   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:14:46.178635   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:14:46.178652   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:14:46.178656   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:14:46.197262   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:14:46.197272   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:14:46.211959   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:14:46.211969   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:14:46.226230   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:14:46.226241   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:14:46.261765   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:14:46.261780   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:14:46.277892   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:14:46.277899   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:14:46.289818   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:14:46.289828   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:14:46.303244   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:14:46.303257   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:14:46.315818   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:14:46.315829   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:14:46.319984   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:14:46.319989   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:14:46.332096   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:14:46.332107   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:14:46.348414   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:14:46.348426   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:14:46.360892   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:14:46.360901   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:14:46.386296   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:14:46.386322   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:14:46.400613   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:14:46.400623   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:14:48.940111   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:14:53.942976   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:14:53.943479   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:14:53.986732   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:14:53.986862   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:14:54.007695   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:14:54.007806   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:14:54.023265   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:14:54.023341   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:14:54.035563   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:14:54.035629   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:14:54.046513   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:14:54.046575   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:14:54.057129   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:14:54.057195   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:14:54.067152   11947 logs.go:276] 0 containers: []
	W0701 05:14:54.067162   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:14:54.067212   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:14:54.077711   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:14:54.077729   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:14:54.077734   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:14:54.096145   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:14:54.096157   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:14:54.122482   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:14:54.122492   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:14:54.146841   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:14:54.146849   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:14:54.160916   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:14:54.160927   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:14:54.172264   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:14:54.172274   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:14:54.183902   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:14:54.183914   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:14:54.200275   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:14:54.200286   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:14:54.211755   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:14:54.211767   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:14:54.231906   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:14:54.231917   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:14:54.236001   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:14:54.236010   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:14:54.291482   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:14:54.291492   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:14:54.303251   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:14:54.303260   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:14:54.314899   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:14:54.314911   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:14:54.326540   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:14:54.326551   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:14:56.861233   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:15:01.863495   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:15:01.863716   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:15:01.890914   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:15:01.891023   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:15:01.908396   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:15:01.908478   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:15:01.926560   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:15:01.926636   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:15:01.940295   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:15:01.940359   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:15:01.950950   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:15:01.951012   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:15:01.961803   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:15:01.961874   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:15:01.976220   11947 logs.go:276] 0 containers: []
	W0701 05:15:01.976232   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:15:01.976292   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:15:01.987382   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:15:01.987403   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:15:01.987408   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:15:02.001937   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:15:02.001948   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:15:02.014132   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:15:02.014142   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:15:02.025499   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:15:02.025509   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:15:02.060177   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:15:02.060186   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:15:02.064330   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:15:02.064338   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:15:02.078817   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:15:02.078828   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:15:02.103296   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:15:02.103305   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:15:02.137086   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:15:02.137097   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:15:02.148213   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:15:02.148225   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:15:02.159500   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:15:02.159512   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:15:02.177687   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:15:02.177696   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:15:02.199585   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:15:02.199595   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:15:02.214514   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:15:02.214528   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:15:02.234296   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:15:02.234320   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:15:04.748620   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:15:09.749906   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:15:09.749989   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 05:15:09.761957   11947 logs.go:276] 1 containers: [363175f1f73f]
	I0701 05:15:09.762044   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 05:15:09.773183   11947 logs.go:276] 1 containers: [ecba765927b9]
	I0701 05:15:09.773241   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 05:15:09.784993   11947 logs.go:276] 4 containers: [d5f4090daec8 1ea989bfcbcb 64291998e86e c6de92ff821f]
	I0701 05:15:09.785071   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 05:15:09.796317   11947 logs.go:276] 1 containers: [42746c7251e2]
	I0701 05:15:09.796390   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 05:15:09.809699   11947 logs.go:276] 1 containers: [6a423eeeccc3]
	I0701 05:15:09.809770   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 05:15:09.821883   11947 logs.go:276] 1 containers: [4df09050d49d]
	I0701 05:15:09.821963   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 05:15:09.835652   11947 logs.go:276] 0 containers: []
	W0701 05:15:09.835664   11947 logs.go:278] No container was found matching "kindnet"
	I0701 05:15:09.835713   11947 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0701 05:15:09.854196   11947 logs.go:276] 1 containers: [a0f37588cd5b]
	I0701 05:15:09.854213   11947 logs.go:123] Gathering logs for Docker ...
	I0701 05:15:09.854218   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 05:15:09.879901   11947 logs.go:123] Gathering logs for kubelet ...
	I0701 05:15:09.879919   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 05:15:09.916332   11947 logs.go:123] Gathering logs for dmesg ...
	I0701 05:15:09.916350   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 05:15:09.921372   11947 logs.go:123] Gathering logs for kube-apiserver [363175f1f73f] ...
	I0701 05:15:09.921382   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363175f1f73f"
	I0701 05:15:09.938122   11947 logs.go:123] Gathering logs for coredns [1ea989bfcbcb] ...
	I0701 05:15:09.938134   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ea989bfcbcb"
	I0701 05:15:09.950592   11947 logs.go:123] Gathering logs for kube-proxy [6a423eeeccc3] ...
	I0701 05:15:09.950606   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a423eeeccc3"
	I0701 05:15:09.963528   11947 logs.go:123] Gathering logs for describe nodes ...
	I0701 05:15:09.963539   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 05:15:10.000680   11947 logs.go:123] Gathering logs for coredns [d5f4090daec8] ...
	I0701 05:15:10.000692   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f4090daec8"
	I0701 05:15:10.013758   11947 logs.go:123] Gathering logs for kube-scheduler [42746c7251e2] ...
	I0701 05:15:10.013770   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42746c7251e2"
	I0701 05:15:10.030814   11947 logs.go:123] Gathering logs for kube-controller-manager [4df09050d49d] ...
	I0701 05:15:10.030827   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df09050d49d"
	I0701 05:15:10.050427   11947 logs.go:123] Gathering logs for container status ...
	I0701 05:15:10.050444   11947 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 05:15:10.063098   11947 logs.go:123] Gathering logs for etcd [ecba765927b9] ...
	I0701 05:15:10.063109   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba765927b9"
	I0701 05:15:10.079127   11947 logs.go:123] Gathering logs for coredns [64291998e86e] ...
	I0701 05:15:10.079139   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64291998e86e"
	I0701 05:15:10.094825   11947 logs.go:123] Gathering logs for coredns [c6de92ff821f] ...
	I0701 05:15:10.094836   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6de92ff821f"
	I0701 05:15:10.108510   11947 logs.go:123] Gathering logs for storage-provisioner [a0f37588cd5b] ...
	I0701 05:15:10.108518   11947 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f37588cd5b"
	I0701 05:15:12.622625   11947 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0701 05:15:17.623536   11947 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0701 05:15:17.629599   11947 out.go:177] 
	W0701 05:15:17.632612   11947 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0701 05:15:17.632638   11947 out.go:239] * 
	W0701 05:15:17.634341   11947 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:15:17.650484   11947 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-841000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.31s)
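
The loop above is minikube's fixed retry pattern: it probes https://10.0.2.15:8443/healthz with a 5-second client timeout, and after each timeout re-enumerates the control-plane containers and tails their logs before retrying, until the 6m0s node-wait budget runs out. Every expected container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, storage-provisioner) is found on each pass; only the healthz endpoint never answers. A minimal sketch for probing the endpoint by hand, assuming the profile still exists and curl is available inside the guest:

	# Hypothetical manual version of the probe minikube runs above;
	# 10.0.2.15:8443 is the in-guest apiserver address from the log.
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-841000 -- \
	  curl -k --max-time 5 https://10.0.2.15:8443/healthz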

TestPause/serial/Start (9.97s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-402000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-402000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.923378s)

-- stdout --
	* [pause-402000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-402000" primary control-plane node in "pause-402000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-402000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-402000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-402000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-402000 -n pause-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-402000 -n pause-402000: exit status 7 (44.358583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.97s)
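
This failure and the remaining qemu2 starts have a different root cause than the upgrade test above: the driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so every VM create fails with "Connection refused" before a guest ever boots. A quick host-side sketch for checking the daemon, assuming socket_vmnet is installed with its default socket path:

	# Hypothetical host-side check: is the socket present, and is the daemon running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

If the socket is absent or no process matches, an identical "Connection refused" is expected from every test that creates a qemu2 VM.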

TestNoKubernetes/serial/StartWithK8s (9.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-730000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-730000 --driver=qemu2 : exit status 80 (9.804509708s)

-- stdout --
	* [NoKubernetes-730000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-730000" primary control-plane node in "NoKubernetes-730000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-730000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-730000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-730000 -n NoKubernetes-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-730000 -n NoKubernetes-730000: exit status 7 (40.977ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.85s)

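Every failure in this block reduces to the same root cause: nothing is answering on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU is ever started. A minimal check on the build host, assuming a Homebrew-managed socket_vmnet launchd service (the socket path is the one logged above; the service name is an assumption):

    # Is the socket_vmnet daemon registered with launchd and running?
    sudo launchctl list | grep -i socket_vmnet
    # Does the unix socket exist, and with usable permissions?
    ls -l /var/run/socket_vmnet
    # Restart the service if it is missing (Homebrew-based install assumed)
    sudo brew services restart socket_vmnet

If the daemon is down, every subsequent "minikube start --driver=qemu2" in this report fails the same way, which matches the uniform exit status 80 across tests.
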
TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-730000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-730000 --no-kubernetes --driver=qemu2 : exit status 80 (5.236762792s)

-- stdout --
	* [NoKubernetes-730000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-730000
	* Restarting existing qemu2 VM for "NoKubernetes-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-730000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-730000 -n NoKubernetes-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-730000 -n NoKubernetes-730000: exit status 7 (59.233208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

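The restart path fails identically to the create path: "driver start" hits the same refused connection on the existing profile. The socket can be probed directly, independent of minikube, using macOS netcat's unix-socket mode (a hypothetical smoke test; the path is the one from the log):

    # Fails immediately while the daemon is down:
    # "Connection refused" if the socket file exists with no listener,
    # "No such file or directory" if the socket file is gone entirely
    nc -U /var/run/socket_vmnet < /dev/null

A failure here confirms the problem is the host daemon, not the minikube profile being restarted.
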
TestNoKubernetes/serial/Start (5.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-730000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-730000 --no-kubernetes --driver=qemu2 : exit status 80 (5.233385417s)

-- stdout --
	* [NoKubernetes-730000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-730000
	* Restarting existing qemu2 VM for "NoKubernetes-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-730000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-730000 -n NoKubernetes-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-730000 -n NoKubernetes-730000: exit status 7 (36.893084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.27s)

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-730000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-730000 --driver=qemu2 : exit status 80 (5.247651375s)

-- stdout --
	* [NoKubernetes-730000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-730000
	* Restarting existing qemu2 VM for "NoKubernetes-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-730000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-730000 -n NoKubernetes-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-730000 -n NoKubernetes-730000: exit status 7 (47.059333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)

TestNetworkPlugins/group/auto/Start (9.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.703271166s)

-- stdout --
	* [auto-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-731000" primary control-plane node in "auto-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:13:35.210537   12249 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:13:35.210698   12249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:13:35.210702   12249 out.go:304] Setting ErrFile to fd 2...
	I0701 05:13:35.210708   12249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:13:35.210853   12249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:13:35.212271   12249 out.go:298] Setting JSON to false
	I0701 05:13:35.230782   12249 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7984,"bootTime":1719828031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:13:35.230868   12249 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:13:35.235533   12249 out.go:177] * [auto-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:13:35.242570   12249 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:13:35.242658   12249 notify.go:220] Checking for updates...
	I0701 05:13:35.249534   12249 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:13:35.252519   12249 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:13:35.255584   12249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:13:35.258493   12249 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:13:35.261566   12249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:13:35.264955   12249 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:13:35.265030   12249 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:13:35.265076   12249 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:13:35.268528   12249 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:13:35.275518   12249 start.go:297] selected driver: qemu2
	I0701 05:13:35.275528   12249 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:13:35.275535   12249 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:13:35.277910   12249 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:13:35.281497   12249 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:13:35.284627   12249 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:13:35.284647   12249 cni.go:84] Creating CNI manager for ""
	I0701 05:13:35.284663   12249 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:13:35.284669   12249 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:13:35.284703   12249 start.go:340] cluster config:
	{Name:auto-731000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:13:35.288626   12249 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:13:35.295498   12249 out.go:177] * Starting "auto-731000" primary control-plane node in "auto-731000" cluster
	I0701 05:13:35.299518   12249 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:13:35.299547   12249 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:13:35.299563   12249 cache.go:56] Caching tarball of preloaded images
	I0701 05:13:35.299641   12249 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:13:35.299648   12249 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:13:35.299728   12249 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/auto-731000/config.json ...
	I0701 05:13:35.299739   12249 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/auto-731000/config.json: {Name:mk07f24be160162555f2728a5de70584ac10659b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:13:35.300059   12249 start.go:360] acquireMachinesLock for auto-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:13:35.300099   12249 start.go:364] duration metric: took 33.875µs to acquireMachinesLock for "auto-731000"
	I0701 05:13:35.300114   12249 start.go:93] Provisioning new machine with config: &{Name:auto-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:13:35.300140   12249 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:13:35.304530   12249 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:13:35.320079   12249 start.go:159] libmachine.API.Create for "auto-731000" (driver="qemu2")
	I0701 05:13:35.320116   12249 client.go:168] LocalClient.Create starting
	I0701 05:13:35.320179   12249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:13:35.320209   12249 main.go:141] libmachine: Decoding PEM data...
	I0701 05:13:35.320218   12249 main.go:141] libmachine: Parsing certificate...
	I0701 05:13:35.320254   12249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:13:35.320276   12249 main.go:141] libmachine: Decoding PEM data...
	I0701 05:13:35.320284   12249 main.go:141] libmachine: Parsing certificate...
	I0701 05:13:35.320673   12249 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:13:35.452180   12249 main.go:141] libmachine: Creating SSH key...
	I0701 05:13:35.535847   12249 main.go:141] libmachine: Creating Disk image...
	I0701 05:13:35.535856   12249 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:13:35.536076   12249 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2
	I0701 05:13:35.546576   12249 main.go:141] libmachine: STDOUT: 
	I0701 05:13:35.546610   12249 main.go:141] libmachine: STDERR: 
	I0701 05:13:35.546674   12249 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2 +20000M
	I0701 05:13:35.555784   12249 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:13:35.555808   12249 main.go:141] libmachine: STDERR: 
	I0701 05:13:35.555829   12249 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2
	I0701 05:13:35.555833   12249 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:13:35.555863   12249 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:58:c7:9a:5d:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2
	I0701 05:13:35.557785   12249 main.go:141] libmachine: STDOUT: 
	I0701 05:13:35.557803   12249 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:13:35.557823   12249 client.go:171] duration metric: took 237.700083ms to LocalClient.Create
	I0701 05:13:37.560050   12249 start.go:128] duration metric: took 2.25986125s to createHost
	I0701 05:13:37.560222   12249 start.go:83] releasing machines lock for "auto-731000", held for 2.26009825s
	W0701 05:13:37.560279   12249 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:13:37.571602   12249 out.go:177] * Deleting "auto-731000" in qemu2 ...
	W0701 05:13:37.596987   12249 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:13:37.597033   12249 start.go:728] Will try again in 5 seconds ...
	I0701 05:13:42.599185   12249 start.go:360] acquireMachinesLock for auto-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:13:42.599447   12249 start.go:364] duration metric: took 204.625µs to acquireMachinesLock for "auto-731000"
	I0701 05:13:42.599510   12249 start.go:93] Provisioning new machine with config: &{Name:auto-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:13:42.599613   12249 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:13:42.611878   12249 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:13:42.644009   12249 start.go:159] libmachine.API.Create for "auto-731000" (driver="qemu2")
	I0701 05:13:42.644057   12249 client.go:168] LocalClient.Create starting
	I0701 05:13:42.644149   12249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:13:42.644203   12249 main.go:141] libmachine: Decoding PEM data...
	I0701 05:13:42.644218   12249 main.go:141] libmachine: Parsing certificate...
	I0701 05:13:42.644276   12249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:13:42.644311   12249 main.go:141] libmachine: Decoding PEM data...
	I0701 05:13:42.644324   12249 main.go:141] libmachine: Parsing certificate...
	I0701 05:13:42.644893   12249 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:13:42.779597   12249 main.go:141] libmachine: Creating SSH key...
	I0701 05:13:42.827980   12249 main.go:141] libmachine: Creating Disk image...
	I0701 05:13:42.827986   12249 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:13:42.828150   12249 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2
	I0701 05:13:42.837506   12249 main.go:141] libmachine: STDOUT: 
	I0701 05:13:42.837528   12249 main.go:141] libmachine: STDERR: 
	I0701 05:13:42.837579   12249 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2 +20000M
	I0701 05:13:42.846092   12249 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:13:42.846109   12249 main.go:141] libmachine: STDERR: 
	I0701 05:13:42.846134   12249 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2
	I0701 05:13:42.846139   12249 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:13:42.846171   12249 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:e4:93:c4:54:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/auto-731000/disk.qcow2
	I0701 05:13:42.848001   12249 main.go:141] libmachine: STDOUT: 
	I0701 05:13:42.848019   12249 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:13:42.848046   12249 client.go:171] duration metric: took 203.982125ms to LocalClient.Create
	I0701 05:13:44.850180   12249 start.go:128] duration metric: took 2.250531875s to createHost
	I0701 05:13:44.850220   12249 start.go:83] releasing machines lock for "auto-731000", held for 2.250747125s
	W0701 05:13:44.850376   12249 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:13:44.860752   12249 out.go:177] 
	W0701 05:13:44.864783   12249 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:13:44.864802   12249 out.go:239] * 
	* 
	W0701 05:13:44.865826   12249 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:13:44.873733   12249 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.70s)

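The verbose stderr above shows exactly how the failure is reached: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the unix socket and hands the connected descriptor to QEMU as fd 3 (hence "-netdev socket,id=net0,fd=3" in the logged command line). The wrapper can be exercised on its own with a trivial child command, taking the connect step out of the VM-creation path entirely (a sketch; binary and socket paths are the ones logged above):

    # socket_vmnet_client <socket-path> <command...>: connects to the socket
    # first, then execs the command with the connection on fd 3
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

While the daemon is down, this should print the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' seen in every block of this report, with no disk image or ISO setup required.
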
TestNetworkPlugins/group/kindnet/Start (9.69s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.689730667s)

-- stdout --
	* [kindnet-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-731000" primary control-plane node in "kindnet-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:13:47.081188   12362 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:13:47.081353   12362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:13:47.081356   12362 out.go:304] Setting ErrFile to fd 2...
	I0701 05:13:47.081358   12362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:13:47.081518   12362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:13:47.082840   12362 out.go:298] Setting JSON to false
	I0701 05:13:47.099533   12362 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7996,"bootTime":1719828031,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:13:47.099600   12362 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:13:47.103780   12362 out.go:177] * [kindnet-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:13:47.109676   12362 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:13:47.109706   12362 notify.go:220] Checking for updates...
	I0701 05:13:47.116655   12362 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:13:47.119665   12362 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:13:47.122673   12362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:13:47.125696   12362 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:13:47.128600   12362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:13:47.132110   12362 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:13:47.132172   12362 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:13:47.132222   12362 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:13:47.136675   12362 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:13:47.143624   12362 start.go:297] selected driver: qemu2
	I0701 05:13:47.143630   12362 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:13:47.143635   12362 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:13:47.145809   12362 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:13:47.148735   12362 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:13:47.151723   12362 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:13:47.151749   12362 cni.go:84] Creating CNI manager for "kindnet"
	I0701 05:13:47.151753   12362 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 05:13:47.151784   12362 start.go:340] cluster config:
	{Name:kindnet-731000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:13:47.155199   12362 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:13:47.162657   12362 out.go:177] * Starting "kindnet-731000" primary control-plane node in "kindnet-731000" cluster
	I0701 05:13:47.166673   12362 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:13:47.166687   12362 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:13:47.166695   12362 cache.go:56] Caching tarball of preloaded images
	I0701 05:13:47.166747   12362 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:13:47.166752   12362 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:13:47.166803   12362 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/kindnet-731000/config.json ...
	I0701 05:13:47.166813   12362 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/kindnet-731000/config.json: {Name:mk645dd1ce2f77922f6fba63c86687ce6e2d8a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:13:47.167025   12362 start.go:360] acquireMachinesLock for kindnet-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:13:47.167055   12362 start.go:364] duration metric: took 25.292µs to acquireMachinesLock for "kindnet-731000"
	I0701 05:13:47.167067   12362 start.go:93] Provisioning new machine with config: &{Name:kindnet-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:13:47.167098   12362 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:13:47.175666   12362 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:13:47.190808   12362 start.go:159] libmachine.API.Create for "kindnet-731000" (driver="qemu2")
	I0701 05:13:47.190832   12362 client.go:168] LocalClient.Create starting
	I0701 05:13:47.190893   12362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:13:47.190921   12362 main.go:141] libmachine: Decoding PEM data...
	I0701 05:13:47.190931   12362 main.go:141] libmachine: Parsing certificate...
	I0701 05:13:47.190969   12362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:13:47.190991   12362 main.go:141] libmachine: Decoding PEM data...
	I0701 05:13:47.191000   12362 main.go:141] libmachine: Parsing certificate...
	I0701 05:13:47.191333   12362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:13:47.318347   12362 main.go:141] libmachine: Creating SSH key...
	I0701 05:13:47.365648   12362 main.go:141] libmachine: Creating Disk image...
	I0701 05:13:47.365665   12362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:13:47.365823   12362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2
	I0701 05:13:47.374972   12362 main.go:141] libmachine: STDOUT: 
	I0701 05:13:47.374989   12362 main.go:141] libmachine: STDERR: 
	I0701 05:13:47.375030   12362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2 +20000M
	I0701 05:13:47.383110   12362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:13:47.383124   12362 main.go:141] libmachine: STDERR: 
	I0701 05:13:47.383147   12362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2
	I0701 05:13:47.383151   12362 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:13:47.383177   12362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:b1:85:3a:cb:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2
	I0701 05:13:47.384787   12362 main.go:141] libmachine: STDOUT: 
	I0701 05:13:47.384801   12362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:13:47.384820   12362 client.go:171] duration metric: took 193.980875ms to LocalClient.Create
	I0701 05:13:49.387028   12362 start.go:128] duration metric: took 2.21988825s to createHost
	I0701 05:13:49.387102   12362 start.go:83] releasing machines lock for "kindnet-731000", held for 2.220024292s
	W0701 05:13:49.387188   12362 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:13:49.398551   12362 out.go:177] * Deleting "kindnet-731000" in qemu2 ...
	W0701 05:13:49.426612   12362 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:13:49.426638   12362 start.go:728] Will try again in 5 seconds ...
	I0701 05:13:54.428905   12362 start.go:360] acquireMachinesLock for kindnet-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:13:54.429339   12362 start.go:364] duration metric: took 336.417µs to acquireMachinesLock for "kindnet-731000"
	I0701 05:13:54.429457   12362 start.go:93] Provisioning new machine with config: &{Name:kindnet-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:13:54.429810   12362 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:13:54.438312   12362 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:13:54.480882   12362 start.go:159] libmachine.API.Create for "kindnet-731000" (driver="qemu2")
	I0701 05:13:54.480930   12362 client.go:168] LocalClient.Create starting
	I0701 05:13:54.481056   12362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:13:54.481119   12362 main.go:141] libmachine: Decoding PEM data...
	I0701 05:13:54.481137   12362 main.go:141] libmachine: Parsing certificate...
	I0701 05:13:54.481201   12362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:13:54.481241   12362 main.go:141] libmachine: Decoding PEM data...
	I0701 05:13:54.481258   12362 main.go:141] libmachine: Parsing certificate...
	I0701 05:13:54.481903   12362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:13:54.616789   12362 main.go:141] libmachine: Creating SSH key...
	I0701 05:13:54.686457   12362 main.go:141] libmachine: Creating Disk image...
	I0701 05:13:54.686463   12362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:13:54.686644   12362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2
	I0701 05:13:54.696473   12362 main.go:141] libmachine: STDOUT: 
	I0701 05:13:54.696496   12362 main.go:141] libmachine: STDERR: 
	I0701 05:13:54.696543   12362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2 +20000M
	I0701 05:13:54.704854   12362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:13:54.704874   12362 main.go:141] libmachine: STDERR: 
	I0701 05:13:54.704892   12362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2
	I0701 05:13:54.704896   12362 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:13:54.704943   12362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:0b:11:2a:70:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kindnet-731000/disk.qcow2
	I0701 05:13:54.707091   12362 main.go:141] libmachine: STDOUT: 
	I0701 05:13:54.707112   12362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:13:54.707125   12362 client.go:171] duration metric: took 226.186833ms to LocalClient.Create
	I0701 05:13:56.709277   12362 start.go:128] duration metric: took 2.279430166s to createHost
	I0701 05:13:56.709337   12362 start.go:83] releasing machines lock for "kindnet-731000", held for 2.279968209s
	W0701 05:13:56.709579   12362 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:13:56.717115   12362 out.go:177] 
	W0701 05:13:56.722007   12362 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:13:56.722052   12362 out.go:239] * 
	W0701 05:13:56.723449   12362 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:13:56.733053   12362 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.69s)
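
Every failure in this group shares one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and host creation aborts with "Connection refused". A quick way to confirm this on the affected host is to check that the daemon is running and the socket exists. The sketch below assumes the daemon binary sits next to the socket_vmnet_client path shown in the log, and the --vmnet-gateway address is a commonly used default, not a value taken from this report:

	# Is the daemon running, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If not, (re)start it manually; vmnet access requires root.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the daemon answers on the socket, rerunning the start command above should get past host creation.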

TestNetworkPlugins/group/flannel/Start (9.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.863788542s)

-- stdout --
	* [flannel-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-731000" primary control-plane node in "flannel-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:13:59.066988   12477 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:13:59.067137   12477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:13:59.067141   12477 out.go:304] Setting ErrFile to fd 2...
	I0701 05:13:59.067143   12477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:13:59.067300   12477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:13:59.068624   12477 out.go:298] Setting JSON to false
	I0701 05:13:59.087137   12477 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8008,"bootTime":1719828031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:13:59.087245   12477 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:13:59.092064   12477 out.go:177] * [flannel-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:13:59.099321   12477 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:13:59.099422   12477 notify.go:220] Checking for updates...
	I0701 05:13:59.105186   12477 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:13:59.108188   12477 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:13:59.109594   12477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:13:59.113148   12477 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:13:59.116157   12477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:13:59.119525   12477 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:13:59.119586   12477 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:13:59.119637   12477 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:13:59.123159   12477 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:13:59.130228   12477 start.go:297] selected driver: qemu2
	I0701 05:13:59.130235   12477 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:13:59.130241   12477 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:13:59.132349   12477 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:13:59.135164   12477 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:13:59.138296   12477 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:13:59.138321   12477 cni.go:84] Creating CNI manager for "flannel"
	I0701 05:13:59.138325   12477 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0701 05:13:59.138367   12477 start.go:340] cluster config:
	{Name:flannel-731000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:13:59.142164   12477 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:13:59.149174   12477 out.go:177] * Starting "flannel-731000" primary control-plane node in "flannel-731000" cluster
	I0701 05:13:59.153200   12477 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:13:59.153221   12477 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:13:59.153230   12477 cache.go:56] Caching tarball of preloaded images
	I0701 05:13:59.153305   12477 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:13:59.153311   12477 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:13:59.153369   12477 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/flannel-731000/config.json ...
	I0701 05:13:59.153380   12477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/flannel-731000/config.json: {Name:mkf450ea909143dbec7f483feafc61fe0cadb5ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:13:59.153682   12477 start.go:360] acquireMachinesLock for flannel-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:13:59.153724   12477 start.go:364] duration metric: took 35.291µs to acquireMachinesLock for "flannel-731000"
	I0701 05:13:59.153738   12477 start.go:93] Provisioning new machine with config: &{Name:flannel-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:13:59.153765   12477 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:13:59.158214   12477 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:13:59.174417   12477 start.go:159] libmachine.API.Create for "flannel-731000" (driver="qemu2")
	I0701 05:13:59.174438   12477 client.go:168] LocalClient.Create starting
	I0701 05:13:59.174514   12477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:13:59.174547   12477 main.go:141] libmachine: Decoding PEM data...
	I0701 05:13:59.174555   12477 main.go:141] libmachine: Parsing certificate...
	I0701 05:13:59.174608   12477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:13:59.174631   12477 main.go:141] libmachine: Decoding PEM data...
	I0701 05:13:59.174642   12477 main.go:141] libmachine: Parsing certificate...
	I0701 05:13:59.175080   12477 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:13:59.303399   12477 main.go:141] libmachine: Creating SSH key...
	I0701 05:13:59.480319   12477 main.go:141] libmachine: Creating Disk image...
	I0701 05:13:59.480326   12477 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:13:59.480529   12477 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2
	I0701 05:13:59.490244   12477 main.go:141] libmachine: STDOUT: 
	I0701 05:13:59.490292   12477 main.go:141] libmachine: STDERR: 
	I0701 05:13:59.490342   12477 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2 +20000M
	I0701 05:13:59.498309   12477 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:13:59.498327   12477 main.go:141] libmachine: STDERR: 
	I0701 05:13:59.498347   12477 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2
	I0701 05:13:59.498352   12477 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:13:59.498380   12477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:88:f0:14:d0:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2
	I0701 05:13:59.500113   12477 main.go:141] libmachine: STDOUT: 
	I0701 05:13:59.500126   12477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:13:59.500146   12477 client.go:171] duration metric: took 325.699375ms to LocalClient.Create
	I0701 05:14:01.502364   12477 start.go:128] duration metric: took 2.348561417s to createHost
	I0701 05:14:01.502429   12477 start.go:83] releasing machines lock for "flannel-731000", held for 2.348681084s
	W0701 05:14:01.502521   12477 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:01.509862   12477 out.go:177] * Deleting "flannel-731000" in qemu2 ...
	W0701 05:14:01.535842   12477 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:01.535872   12477 start.go:728] Will try again in 5 seconds ...
	I0701 05:14:06.538170   12477 start.go:360] acquireMachinesLock for flannel-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:14:06.538801   12477 start.go:364] duration metric: took 453.667µs to acquireMachinesLock for "flannel-731000"
	I0701 05:14:06.538937   12477 start.go:93] Provisioning new machine with config: &{Name:flannel-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:14:06.539198   12477 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:14:06.544768   12477 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:14:06.596495   12477 start.go:159] libmachine.API.Create for "flannel-731000" (driver="qemu2")
	I0701 05:14:06.596546   12477 client.go:168] LocalClient.Create starting
	I0701 05:14:06.596672   12477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:14:06.596748   12477 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:06.596766   12477 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:06.596830   12477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:14:06.596877   12477 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:06.596888   12477 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:06.597521   12477 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:14:06.739393   12477 main.go:141] libmachine: Creating SSH key...
	I0701 05:14:06.835714   12477 main.go:141] libmachine: Creating Disk image...
	I0701 05:14:06.835725   12477 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:14:06.835935   12477 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2
	I0701 05:14:06.846665   12477 main.go:141] libmachine: STDOUT: 
	I0701 05:14:06.846688   12477 main.go:141] libmachine: STDERR: 
	I0701 05:14:06.846773   12477 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2 +20000M
	I0701 05:14:06.856044   12477 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:14:06.856074   12477 main.go:141] libmachine: STDERR: 
	I0701 05:14:06.856086   12477 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2
	I0701 05:14:06.856093   12477 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:14:06.856123   12477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:2c:c8:5c:a3:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/flannel-731000/disk.qcow2
	I0701 05:14:06.858266   12477 main.go:141] libmachine: STDOUT: 
	I0701 05:14:06.858284   12477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:14:06.858297   12477 client.go:171] duration metric: took 261.742834ms to LocalClient.Create
	I0701 05:14:08.860500   12477 start.go:128] duration metric: took 2.321253875s to createHost
	I0701 05:14:08.860570   12477 start.go:83] releasing machines lock for "flannel-731000", held for 2.321732292s
	W0701 05:14:08.860910   12477 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:08.870488   12477 out.go:177] 
	W0701 05:14:08.875644   12477 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:14:08.875710   12477 out.go:239] * 
	W0701 05:14:08.877279   12477 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:14:08.887562   12477 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.87s)
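
Note that the disk-image steps succeed on every attempt: both qemu-img invocations ("convert" and "resize") return with empty STDERR and "Image resized." on STDOUT, and only the subsequent socket_vmnet_client launch fails. The two steps can be reproduced in isolation to rule out qemu-img itself; the file names below are placeholders rather than paths from this run:

	# libmachine's two disk steps, run by hand (placeholder paths)
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M

This isolates the failure to the host networking socket rather than the QEMU toolchain.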

TestNetworkPlugins/group/enable-default-cni/Start (9.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.822376625s)

-- stdout --
	* [enable-default-cni-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-731000" primary control-plane node in "enable-default-cni-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:14:11.282152   12594 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:14:11.282277   12594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:14:11.282280   12594 out.go:304] Setting ErrFile to fd 2...
	I0701 05:14:11.282283   12594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:14:11.282402   12594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:14:11.283437   12594 out.go:298] Setting JSON to false
	I0701 05:14:11.299472   12594 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8020,"bootTime":1719828031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:14:11.299571   12594 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:14:11.304896   12594 out.go:177] * [enable-default-cni-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:14:11.311748   12594 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:14:11.311843   12594 notify.go:220] Checking for updates...
	I0701 05:14:11.318874   12594 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:14:11.320304   12594 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:14:11.323824   12594 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:14:11.326829   12594 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:14:11.329914   12594 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:14:11.333217   12594 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:14:11.333289   12594 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:14:11.333341   12594 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:14:11.336874   12594 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:14:11.343790   12594 start.go:297] selected driver: qemu2
	I0701 05:14:11.343797   12594 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:14:11.343802   12594 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:14:11.345851   12594 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:14:11.353050   12594 out.go:177] * Automatically selected the socket_vmnet network
	E0701 05:14:11.355979   12594 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0701 05:14:11.355993   12594 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:14:11.356033   12594 cni.go:84] Creating CNI manager for "bridge"
	I0701 05:14:11.356038   12594 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:14:11.356068   12594 start.go:340] cluster config:
	{Name:enable-default-cni-731000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:14:11.359408   12594 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:14:11.366891   12594 out.go:177] * Starting "enable-default-cni-731000" primary control-plane node in "enable-default-cni-731000" cluster
	I0701 05:14:11.369762   12594 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:14:11.369775   12594 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:14:11.369781   12594 cache.go:56] Caching tarball of preloaded images
	I0701 05:14:11.369830   12594 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:14:11.369835   12594 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:14:11.369889   12594 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/enable-default-cni-731000/config.json ...
	I0701 05:14:11.369899   12594 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/enable-default-cni-731000/config.json: {Name:mk1638332fb419bc91c65e62642f1bddbde01806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:14:11.370350   12594 start.go:360] acquireMachinesLock for enable-default-cni-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:14:11.370387   12594 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "enable-default-cni-731000"
	I0701 05:14:11.370400   12594 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:14:11.370425   12594 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:14:11.377793   12594 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:14:11.393275   12594 start.go:159] libmachine.API.Create for "enable-default-cni-731000" (driver="qemu2")
	I0701 05:14:11.393303   12594 client.go:168] LocalClient.Create starting
	I0701 05:14:11.393372   12594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:14:11.393404   12594 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:11.393413   12594 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:11.393463   12594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:14:11.393487   12594 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:11.393498   12594 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:11.393929   12594 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:14:11.524177   12594 main.go:141] libmachine: Creating SSH key...
	I0701 05:14:11.649278   12594 main.go:141] libmachine: Creating Disk image...
	I0701 05:14:11.649284   12594 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:14:11.649455   12594 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2
	I0701 05:14:11.658851   12594 main.go:141] libmachine: STDOUT: 
	I0701 05:14:11.658866   12594 main.go:141] libmachine: STDERR: 
	I0701 05:14:11.658917   12594 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2 +20000M
	I0701 05:14:11.667260   12594 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:14:11.667276   12594 main.go:141] libmachine: STDERR: 
	I0701 05:14:11.667292   12594 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2
	I0701 05:14:11.667303   12594 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:14:11.667339   12594 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:cb:44:1c:e0:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2
	I0701 05:14:11.669106   12594 main.go:141] libmachine: STDOUT: 
	I0701 05:14:11.669120   12594 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:14:11.669140   12594 client.go:171] duration metric: took 275.830083ms to LocalClient.Create
	I0701 05:14:13.671336   12594 start.go:128] duration metric: took 2.3008695s to createHost
	I0701 05:14:13.671410   12594 start.go:83] releasing machines lock for "enable-default-cni-731000", held for 2.301000417s
	W0701 05:14:13.671485   12594 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:13.681563   12594 out.go:177] * Deleting "enable-default-cni-731000" in qemu2 ...
	W0701 05:14:13.701769   12594 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:13.701794   12594 start.go:728] Will try again in 5 seconds ...
	I0701 05:14:18.703993   12594 start.go:360] acquireMachinesLock for enable-default-cni-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:14:18.704434   12594 start.go:364] duration metric: took 356.375µs to acquireMachinesLock for "enable-default-cni-731000"
	I0701 05:14:18.704616   12594 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:14:18.704883   12594 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:14:18.715390   12594 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:14:18.756207   12594 start.go:159] libmachine.API.Create for "enable-default-cni-731000" (driver="qemu2")
	I0701 05:14:18.756257   12594 client.go:168] LocalClient.Create starting
	I0701 05:14:18.756374   12594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:14:18.756433   12594 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:18.756448   12594 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:18.756508   12594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:14:18.756548   12594 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:18.756559   12594 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:18.757145   12594 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:14:18.895593   12594 main.go:141] libmachine: Creating SSH key...
	I0701 05:14:19.015466   12594 main.go:141] libmachine: Creating Disk image...
	I0701 05:14:19.015475   12594 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:14:19.015667   12594 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2
	I0701 05:14:19.025272   12594 main.go:141] libmachine: STDOUT: 
	I0701 05:14:19.025289   12594 main.go:141] libmachine: STDERR: 
	I0701 05:14:19.025345   12594 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2 +20000M
	I0701 05:14:19.033206   12594 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:14:19.033240   12594 main.go:141] libmachine: STDERR: 
	I0701 05:14:19.033254   12594 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2
	I0701 05:14:19.033258   12594 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:14:19.033292   12594 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:68:ce:f0:ea:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/enable-default-cni-731000/disk.qcow2
	I0701 05:14:19.034993   12594 main.go:141] libmachine: STDOUT: 
	I0701 05:14:19.035026   12594 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:14:19.035040   12594 client.go:171] duration metric: took 278.774333ms to LocalClient.Create
	I0701 05:14:21.037246   12594 start.go:128] duration metric: took 2.332313875s to createHost
	I0701 05:14:21.037320   12594 start.go:83] releasing machines lock for "enable-default-cni-731000", held for 2.332801125s
	W0701 05:14:21.037685   12594 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:21.045223   12594 out.go:177] 
	W0701 05:14:21.050253   12594 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:14:21.050271   12594 out.go:239] * 
	W0701 05:14:21.052196   12594 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:14:21.061240   12594 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.82s)
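Every network-plugin start in this group fails at the same point: the socket_vmnet_client wrapper cannot connect to the /var/run/socket_vmnet unix socket, so the QEMU process is never launched. A minimal standalone Go sketch (not minikube code; the socket path is taken from the log above) that performs the same dial and distinguishes a missing socket file from a socket nothing is listening on:

    // probe.go — standalone diagnostic sketch; dials the same unix socket
    // that socket_vmnet_client fails to reach in the runs above.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/socket_vmnet" // path from the log above

    	// A refused connection means the socket file exists but no daemon
    	// is accepting on it; a "no such file" error means the daemon was
    	// never started at all.
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

Under this reading, "Connection refused" indicates the socket path exists but the socket_vmnet daemon is not running or not accepting connections on the build host, which would explain why all four plugin variants fail identically.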

TestNetworkPlugins/group/bridge/Start (9.74s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.737530041s)

-- stdout --
	* [bridge-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-731000" primary control-plane node in "bridge-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:14:23.239371   12707 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:14:23.239512   12707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:14:23.239517   12707 out.go:304] Setting ErrFile to fd 2...
	I0701 05:14:23.239520   12707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:14:23.239681   12707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:14:23.240732   12707 out.go:298] Setting JSON to false
	I0701 05:14:23.256974   12707 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8032,"bootTime":1719828031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:14:23.257047   12707 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:14:23.264880   12707 out.go:177] * [bridge-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:14:23.268850   12707 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:14:23.268872   12707 notify.go:220] Checking for updates...
	I0701 05:14:23.275855   12707 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:14:23.278837   12707 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:14:23.281840   12707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:14:23.284856   12707 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:14:23.287805   12707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:14:23.291146   12707 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:14:23.291215   12707 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:14:23.291267   12707 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:14:23.294851   12707 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:14:23.301813   12707 start.go:297] selected driver: qemu2
	I0701 05:14:23.301819   12707 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:14:23.301832   12707 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:14:23.304172   12707 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:14:23.307825   12707 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:14:23.309381   12707 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:14:23.309409   12707 cni.go:84] Creating CNI manager for "bridge"
	I0701 05:14:23.309413   12707 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:14:23.309450   12707 start.go:340] cluster config:
	{Name:bridge-731000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:14:23.313200   12707 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:14:23.320871   12707 out.go:177] * Starting "bridge-731000" primary control-plane node in "bridge-731000" cluster
	I0701 05:14:23.324792   12707 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:14:23.324804   12707 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:14:23.324812   12707 cache.go:56] Caching tarball of preloaded images
	I0701 05:14:23.324865   12707 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:14:23.324870   12707 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:14:23.324925   12707 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/bridge-731000/config.json ...
	I0701 05:14:23.324936   12707 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/bridge-731000/config.json: {Name:mk47cfc64227e0272b58191b8274c71e931944f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:14:23.325377   12707 start.go:360] acquireMachinesLock for bridge-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:14:23.325412   12707 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "bridge-731000"
	I0701 05:14:23.325425   12707 start.go:93] Provisioning new machine with config: &{Name:bridge-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:bridge-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:14:23.325455   12707 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:14:23.333762   12707 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:14:23.350084   12707 start.go:159] libmachine.API.Create for "bridge-731000" (driver="qemu2")
	I0701 05:14:23.350109   12707 client.go:168] LocalClient.Create starting
	I0701 05:14:23.350172   12707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:14:23.350201   12707 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:23.350209   12707 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:23.350250   12707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:14:23.350272   12707 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:23.350278   12707 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:23.350779   12707 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:14:23.478798   12707 main.go:141] libmachine: Creating SSH key...
	I0701 05:14:23.578076   12707 main.go:141] libmachine: Creating Disk image...
	I0701 05:14:23.578081   12707 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:14:23.578246   12707 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2
	I0701 05:14:23.587592   12707 main.go:141] libmachine: STDOUT: 
	I0701 05:14:23.587609   12707 main.go:141] libmachine: STDERR: 
	I0701 05:14:23.587650   12707 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2 +20000M
	I0701 05:14:23.595465   12707 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:14:23.595479   12707 main.go:141] libmachine: STDERR: 
	I0701 05:14:23.595496   12707 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2
	I0701 05:14:23.595504   12707 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:14:23.595534   12707 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:ba:92:d8:05:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2
	I0701 05:14:23.597144   12707 main.go:141] libmachine: STDOUT: 
	I0701 05:14:23.597157   12707 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:14:23.597177   12707 client.go:171] duration metric: took 247.061167ms to LocalClient.Create
	I0701 05:14:25.599385   12707 start.go:128] duration metric: took 2.273888542s to createHost
	I0701 05:14:25.599479   12707 start.go:83] releasing machines lock for "bridge-731000", held for 2.274042375s
	W0701 05:14:25.599559   12707 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:25.609569   12707 out.go:177] * Deleting "bridge-731000" in qemu2 ...
	W0701 05:14:25.634469   12707 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:25.634501   12707 start.go:728] Will try again in 5 seconds ...
	I0701 05:14:30.636651   12707 start.go:360] acquireMachinesLock for bridge-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:14:30.636772   12707 start.go:364] duration metric: took 87.5µs to acquireMachinesLock for "bridge-731000"
	I0701 05:14:30.636787   12707 start.go:93] Provisioning new machine with config: &{Name:bridge-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:bridge-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:14:30.636823   12707 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:14:30.645367   12707 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:14:30.661303   12707 start.go:159] libmachine.API.Create for "bridge-731000" (driver="qemu2")
	I0701 05:14:30.661331   12707 client.go:168] LocalClient.Create starting
	I0701 05:14:30.661397   12707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:14:30.661430   12707 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:30.661439   12707 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:30.661473   12707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:14:30.661495   12707 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:30.661500   12707 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:30.661788   12707 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:14:30.791778   12707 main.go:141] libmachine: Creating SSH key...
	I0701 05:14:30.885636   12707 main.go:141] libmachine: Creating Disk image...
	I0701 05:14:30.885645   12707 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:14:30.885837   12707 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2
	I0701 05:14:30.895320   12707 main.go:141] libmachine: STDOUT: 
	I0701 05:14:30.895350   12707 main.go:141] libmachine: STDERR: 
	I0701 05:14:30.895417   12707 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2 +20000M
	I0701 05:14:30.903610   12707 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:14:30.903624   12707 main.go:141] libmachine: STDERR: 
	I0701 05:14:30.903646   12707 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2
	I0701 05:14:30.903651   12707 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:14:30.903689   12707 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:f1:de:19:43:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/bridge-731000/disk.qcow2
	I0701 05:14:30.905520   12707 main.go:141] libmachine: STDOUT: 
	I0701 05:14:30.905543   12707 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:14:30.905558   12707 client.go:171] duration metric: took 244.222292ms to LocalClient.Create
	I0701 05:14:32.907801   12707 start.go:128] duration metric: took 2.270937917s to createHost
	I0701 05:14:32.907877   12707 start.go:83] releasing machines lock for "bridge-731000", held for 2.271081625s
	W0701 05:14:32.908179   12707 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:32.916701   12707 out.go:177] 
	W0701 05:14:32.922860   12707 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:14:32.922927   12707 out.go:239] * 
	* 
	W0701 05:14:32.925998   12707 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:14:32.933649   12707 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.74s)
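Note that the disk-image preparation succeeds on every attempt: libmachine runs qemu-img convert to produce the qcow2 image and then a relative qemu-img resize, and both return empty STDERR, so the failure is isolated to the socket connection step. An illustrative Go sketch of that preparation step using the same qemu-img flags the log records (the /tmp paths and error handling are stand-ins, not minikube's actual code):

    // diskimage.go — illustrative sketch of the qemu-img convert/resize
    // sequence logged by libmachine; paths are hypothetical stand-ins.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	raw := "/tmp/demo/disk.qcow2.raw" // hypothetical raw source image
    	img := "/tmp/demo/disk.qcow2"     // hypothetical qcow2 target

    	// qemu-img convert -f raw -O qcow2 <raw> <img>
    	if out, err := exec.Command("qemu-img", "convert",
    		"-f", "raw", "-O", "qcow2", raw, img).CombinedOutput(); err != nil {
    		log.Fatalf("convert failed: %v\n%s", err, out)
    	}

    	// qemu-img resize <img> +20000M — the "+" grows the image relative
    	// to its current size, matching the "+20000M" in the log above.
    	if out, err := exec.Command("qemu-img", "resize",
    		img, "+20000M").CombinedOutput(); err != nil {
    		log.Fatalf("resize failed: %v\n%s", err, out)
    	}
    	fmt.Println("disk image ready:", img)
    }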

TestNetworkPlugins/group/kubenet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.824569625s)

-- stdout --
	* [kubenet-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-731000" primary control-plane node in "kubenet-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:14:35.059587   12822 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:14:35.059739   12822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:14:35.059742   12822 out.go:304] Setting ErrFile to fd 2...
	I0701 05:14:35.059745   12822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:14:35.059873   12822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:14:35.061036   12822 out.go:298] Setting JSON to false
	I0701 05:14:35.077567   12822 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8044,"bootTime":1719828031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:14:35.077636   12822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:14:35.083726   12822 out.go:177] * [kubenet-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:14:35.091686   12822 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:14:35.091802   12822 notify.go:220] Checking for updates...
	I0701 05:14:35.100579   12822 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:14:35.103626   12822 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:14:35.106619   12822 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:14:35.108228   12822 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:14:35.111656   12822 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:14:35.114971   12822 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:14:35.115040   12822 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:14:35.115091   12822 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:14:35.118507   12822 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:14:35.125609   12822 start.go:297] selected driver: qemu2
	I0701 05:14:35.125615   12822 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:14:35.125620   12822 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:14:35.127867   12822 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:14:35.130703   12822 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:14:35.133651   12822 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:14:35.133665   12822 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0701 05:14:35.133689   12822 start.go:340] cluster config:
	{Name:kubenet-731000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:14:35.137301   12822 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:14:35.143589   12822 out.go:177] * Starting "kubenet-731000" primary control-plane node in "kubenet-731000" cluster
	I0701 05:14:35.147659   12822 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:14:35.147675   12822 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:14:35.147685   12822 cache.go:56] Caching tarball of preloaded images
	I0701 05:14:35.147750   12822 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:14:35.147756   12822 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:14:35.147833   12822 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/kubenet-731000/config.json ...
	I0701 05:14:35.147843   12822 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/kubenet-731000/config.json: {Name:mk7b20dbeb785aea7a85493517d7635129b284fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:14:35.148281   12822 start.go:360] acquireMachinesLock for kubenet-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:14:35.148312   12822 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "kubenet-731000"
	I0701 05:14:35.148325   12822 start.go:93] Provisioning new machine with config: &{Name:kubenet-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kubenet-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:14:35.148350   12822 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:14:35.151638   12822 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:14:35.167941   12822 start.go:159] libmachine.API.Create for "kubenet-731000" (driver="qemu2")
	I0701 05:14:35.167977   12822 client.go:168] LocalClient.Create starting
	I0701 05:14:35.168058   12822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:14:35.168091   12822 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:35.168102   12822 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:35.168144   12822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:14:35.168167   12822 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:35.168175   12822 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:35.168548   12822 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:14:35.296157   12822 main.go:141] libmachine: Creating SSH key...
	I0701 05:14:35.390026   12822 main.go:141] libmachine: Creating Disk image...
	I0701 05:14:35.390035   12822 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:14:35.390229   12822 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2
	I0701 05:14:35.399387   12822 main.go:141] libmachine: STDOUT: 
	I0701 05:14:35.399406   12822 main.go:141] libmachine: STDERR: 
	I0701 05:14:35.399472   12822 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2 +20000M
	I0701 05:14:35.407460   12822 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:14:35.407473   12822 main.go:141] libmachine: STDERR: 
	I0701 05:14:35.407496   12822 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2
	I0701 05:14:35.407509   12822 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:14:35.407555   12822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:ff:5f:03:65:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2
	I0701 05:14:35.409168   12822 main.go:141] libmachine: STDOUT: 
	I0701 05:14:35.409183   12822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:14:35.409204   12822 client.go:171] duration metric: took 241.213917ms to LocalClient.Create
	I0701 05:14:37.411414   12822 start.go:128] duration metric: took 2.263025292s to createHost
	I0701 05:14:37.411543   12822 start.go:83] releasing machines lock for "kubenet-731000", held for 2.263193583s
	W0701 05:14:37.411631   12822 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:37.424731   12822 out.go:177] * Deleting "kubenet-731000" in qemu2 ...
	W0701 05:14:37.449826   12822 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:37.449855   12822 start.go:728] Will try again in 5 seconds ...
	I0701 05:14:42.452071   12822 start.go:360] acquireMachinesLock for kubenet-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:14:42.452298   12822 start.go:364] duration metric: took 186.041µs to acquireMachinesLock for "kubenet-731000"
	I0701 05:14:42.452366   12822 start.go:93] Provisioning new machine with config: &{Name:kubenet-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kubenet-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:14:42.452488   12822 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:14:42.467879   12822 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:14:42.502168   12822 start.go:159] libmachine.API.Create for "kubenet-731000" (driver="qemu2")
	I0701 05:14:42.502209   12822 client.go:168] LocalClient.Create starting
	I0701 05:14:42.502309   12822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:14:42.502366   12822 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:42.502394   12822 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:42.502456   12822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:14:42.502494   12822 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:42.502503   12822 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:42.503177   12822 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:14:42.637353   12822 main.go:141] libmachine: Creating SSH key...
	I0701 05:14:42.791207   12822 main.go:141] libmachine: Creating Disk image...
	I0701 05:14:42.791219   12822 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:14:42.791448   12822 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2
	I0701 05:14:42.801260   12822 main.go:141] libmachine: STDOUT: 
	I0701 05:14:42.801278   12822 main.go:141] libmachine: STDERR: 
	I0701 05:14:42.801328   12822 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2 +20000M
	I0701 05:14:42.809330   12822 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:14:42.809344   12822 main.go:141] libmachine: STDERR: 
	I0701 05:14:42.809359   12822 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2
	I0701 05:14:42.809363   12822 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:14:42.809395   12822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:75:af:d1:ac:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/kubenet-731000/disk.qcow2
	I0701 05:14:42.811021   12822 main.go:141] libmachine: STDOUT: 
	I0701 05:14:42.811035   12822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:14:42.811047   12822 client.go:171] duration metric: took 308.832709ms to LocalClient.Create
	I0701 05:14:44.813262   12822 start.go:128] duration metric: took 2.360710167s to createHost
	I0701 05:14:44.813385   12822 start.go:83] releasing machines lock for "kubenet-731000", held for 2.36105775s
	W0701 05:14:44.813697   12822 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:44.825434   12822 out.go:177] 
	W0701 05:14:44.829371   12822 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:14:44.829397   12822 out.go:239] * 
	* 
	W0701 05:14:44.832046   12822 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:14:44.842354   12822 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.83s)
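Each failed run also shows the driver's recovery path: StartHost fails, the half-created profile is deleted, and a single retry follows after a five-second pause before the test exits with status 80 (GUEST_PROVISION). A condensed Go sketch of that control flow, where createHost is a hypothetical stand-in for libmachine.API.Create:

    // retryflow.go — condensed sketch of the create → delete → retry flow
    // in the stderr above; createHost is a hypothetical stand-in.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    func createHost() error {
    	// Stand-in for libmachine.API.Create; here it always fails the
    	// same way the runs above do.
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	err := createHost()
    	if err == nil {
    		fmt.Println("host created")
    		return
    	}
    	fmt.Println("! StartHost failed, but will try again:", err)
    	time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
    	if err := createHost(); err != nil {
    		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    		os.Exit(80) // the runs above exit with status 80
    	}
    	fmt.Println("host created")
    }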

TestNetworkPlugins/group/custom-flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.784134667s)

-- stdout --
	* [custom-flannel-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-731000" primary control-plane node in "custom-flannel-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:14:47.022156   12933 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:14:47.022291   12933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:14:47.022294   12933 out.go:304] Setting ErrFile to fd 2...
	I0701 05:14:47.022296   12933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:14:47.022425   12933 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:14:47.023471   12933 out.go:298] Setting JSON to false
	I0701 05:14:47.039565   12933 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8056,"bootTime":1719828031,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:14:47.039633   12933 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:14:47.045197   12933 out.go:177] * [custom-flannel-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:14:47.052082   12933 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:14:47.052146   12933 notify.go:220] Checking for updates...
	I0701 05:14:47.059053   12933 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:14:47.062037   12933 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:14:47.065112   12933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:14:47.068045   12933 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:14:47.071068   12933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:14:47.074421   12933 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:14:47.074486   12933 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:14:47.074529   12933 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:14:47.078088   12933 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:14:47.085117   12933 start.go:297] selected driver: qemu2
	I0701 05:14:47.085122   12933 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:14:47.085128   12933 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:14:47.087209   12933 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:14:47.090088   12933 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:14:47.093199   12933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:14:47.093230   12933 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0701 05:14:47.093248   12933 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0701 05:14:47.093277   12933 start.go:340] cluster config:
	{Name:custom-flannel-731000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:14:47.096675   12933 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:14:47.104063   12933 out.go:177] * Starting "custom-flannel-731000" primary control-plane node in "custom-flannel-731000" cluster
	I0701 05:14:47.108071   12933 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:14:47.108085   12933 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:14:47.108091   12933 cache.go:56] Caching tarball of preloaded images
	I0701 05:14:47.108142   12933 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:14:47.108147   12933 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:14:47.108194   12933 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/custom-flannel-731000/config.json ...
	I0701 05:14:47.108204   12933 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/custom-flannel-731000/config.json: {Name:mke0e58893e92e00b3203d2223f981d7535433d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:14:47.108643   12933 start.go:360] acquireMachinesLock for custom-flannel-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:14:47.108676   12933 start.go:364] duration metric: took 24.958µs to acquireMachinesLock for "custom-flannel-731000"
	I0701 05:14:47.108689   12933 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:14:47.108717   12933 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:14:47.113124   12933 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:14:47.128148   12933 start.go:159] libmachine.API.Create for "custom-flannel-731000" (driver="qemu2")
	I0701 05:14:47.128171   12933 client.go:168] LocalClient.Create starting
	I0701 05:14:47.128234   12933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:14:47.128263   12933 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:47.128274   12933 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:47.128312   12933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:14:47.128334   12933 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:47.128340   12933 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:47.128661   12933 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:14:47.270592   12933 main.go:141] libmachine: Creating SSH key...
	I0701 05:14:47.398524   12933 main.go:141] libmachine: Creating Disk image...
	I0701 05:14:47.398530   12933 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:14:47.398698   12933 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2
	I0701 05:14:47.408305   12933 main.go:141] libmachine: STDOUT: 
	I0701 05:14:47.408321   12933 main.go:141] libmachine: STDERR: 
	I0701 05:14:47.408371   12933 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2 +20000M
	I0701 05:14:47.416261   12933 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:14:47.416274   12933 main.go:141] libmachine: STDERR: 
	I0701 05:14:47.416288   12933 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2
	I0701 05:14:47.416291   12933 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:14:47.416323   12933 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:04:ce:8e:87:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2
	I0701 05:14:47.418014   12933 main.go:141] libmachine: STDOUT: 
	I0701 05:14:47.418026   12933 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:14:47.418045   12933 client.go:171] duration metric: took 289.86775ms to LocalClient.Create
	I0701 05:14:49.420305   12933 start.go:128] duration metric: took 2.311503625s to createHost
	I0701 05:14:49.420365   12933 start.go:83] releasing machines lock for "custom-flannel-731000", held for 2.311666084s
	W0701 05:14:49.420427   12933 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:49.428718   12933 out.go:177] * Deleting "custom-flannel-731000" in qemu2 ...
	W0701 05:14:49.451567   12933 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:49.451595   12933 start.go:728] Will try again in 5 seconds ...
	I0701 05:14:54.453734   12933 start.go:360] acquireMachinesLock for custom-flannel-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:14:54.453840   12933 start.go:364] duration metric: took 86.417µs to acquireMachinesLock for "custom-flannel-731000"
	I0701 05:14:54.453863   12933 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:14:54.453933   12933 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:14:54.463166   12933 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:14:54.482743   12933 start.go:159] libmachine.API.Create for "custom-flannel-731000" (driver="qemu2")
	I0701 05:14:54.482780   12933 client.go:168] LocalClient.Create starting
	I0701 05:14:54.482867   12933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:14:54.482899   12933 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:54.482909   12933 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:54.482950   12933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:14:54.482976   12933 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:54.482984   12933 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:54.483403   12933 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:14:54.610900   12933 main.go:141] libmachine: Creating SSH key...
	I0701 05:14:54.719102   12933 main.go:141] libmachine: Creating Disk image...
	I0701 05:14:54.719107   12933 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:14:54.719296   12933 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2
	I0701 05:14:54.728504   12933 main.go:141] libmachine: STDOUT: 
	I0701 05:14:54.728524   12933 main.go:141] libmachine: STDERR: 
	I0701 05:14:54.728589   12933 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2 +20000M
	I0701 05:14:54.737295   12933 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:14:54.737313   12933 main.go:141] libmachine: STDERR: 
	I0701 05:14:54.737330   12933 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2
	I0701 05:14:54.737335   12933 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:14:54.737367   12933 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:18:d4:dd:ea:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/custom-flannel-731000/disk.qcow2
	I0701 05:14:54.739211   12933 main.go:141] libmachine: STDOUT: 
	I0701 05:14:54.739227   12933 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:14:54.739240   12933 client.go:171] duration metric: took 256.454333ms to LocalClient.Create
	I0701 05:14:56.741449   12933 start.go:128] duration metric: took 2.287470166s to createHost
	I0701 05:14:56.741533   12933 start.go:83] releasing machines lock for "custom-flannel-731000", held for 2.287667959s
	W0701 05:14:56.741991   12933 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:14:56.751713   12933 out.go:177] 
	W0701 05:14:56.756833   12933 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:14:56.756873   12933 out.go:239] * 
	* 
	W0701 05:14:56.758303   12933 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:14:56.770696   12933 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.79s)
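
Every start in this group dies the same way: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the socket_vmnet daemon on the unix socket /var/run/socket_vmnet, and each dial is refused before the VM ever boots. The failure can be reproduced outside minikube by probing that socket directly; the sketch below is a minimal illustration (not part of minikube or this test suite), with the socket path taken from the failing command line above.

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Socket path copied from the failing qemu2 invocation in the log above.
		const sock = "/var/run/socket_vmnet"
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("socket file problem:", err) // daemon never created the socket
			os.Exit(1)
		}
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Println("dial failed:", err) // matches the "Connection refused" in the log
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails here too, the fix is host-side (restoring the socket_vmnet daemon on the agent) rather than anything in the per-profile configs, which is consistent with every profile in this run failing identically.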

TestNetworkPlugins/group/calico/Start (9.66s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.659596916s)

-- stdout --
	* [calico-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-731000" primary control-plane node in "calico-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:14:59.138860   13052 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:14:59.138986   13052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:14:59.138989   13052 out.go:304] Setting ErrFile to fd 2...
	I0701 05:14:59.138991   13052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:14:59.139111   13052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:14:59.140178   13052 out.go:298] Setting JSON to false
	I0701 05:14:59.156719   13052 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8068,"bootTime":1719828031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:14:59.156802   13052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:14:59.162369   13052 out.go:177] * [calico-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:14:59.169328   13052 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:14:59.169373   13052 notify.go:220] Checking for updates...
	I0701 05:14:59.174940   13052 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:14:59.178341   13052 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:14:59.181339   13052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:14:59.184347   13052 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:14:59.187284   13052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:14:59.190694   13052 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:14:59.190760   13052 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:14:59.190810   13052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:14:59.194355   13052 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:14:59.201299   13052 start.go:297] selected driver: qemu2
	I0701 05:14:59.201304   13052 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:14:59.201309   13052 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:14:59.203663   13052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:14:59.206354   13052 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:14:59.209317   13052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:14:59.209332   13052 cni.go:84] Creating CNI manager for "calico"
	I0701 05:14:59.209335   13052 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0701 05:14:59.209367   13052 start.go:340] cluster config:
	{Name:calico-731000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:14:59.212743   13052 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:14:59.220192   13052 out.go:177] * Starting "calico-731000" primary control-plane node in "calico-731000" cluster
	I0701 05:14:59.224331   13052 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:14:59.224345   13052 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:14:59.224351   13052 cache.go:56] Caching tarball of preloaded images
	I0701 05:14:59.224413   13052 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:14:59.224418   13052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:14:59.224466   13052 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/calico-731000/config.json ...
	I0701 05:14:59.224478   13052 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/calico-731000/config.json: {Name:mkd8cb19d7f6de06c91e1dc5df74b948dfa343f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:14:59.224914   13052 start.go:360] acquireMachinesLock for calico-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:14:59.224944   13052 start.go:364] duration metric: took 25.791µs to acquireMachinesLock for "calico-731000"
	I0701 05:14:59.224959   13052 start.go:93] Provisioning new machine with config: &{Name:calico-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:14:59.224985   13052 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:14:59.228318   13052 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:14:59.243965   13052 start.go:159] libmachine.API.Create for "calico-731000" (driver="qemu2")
	I0701 05:14:59.243994   13052 client.go:168] LocalClient.Create starting
	I0701 05:14:59.244067   13052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:14:59.244098   13052 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:59.244106   13052 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:59.244154   13052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:14:59.244177   13052 main.go:141] libmachine: Decoding PEM data...
	I0701 05:14:59.244187   13052 main.go:141] libmachine: Parsing certificate...
	I0701 05:14:59.244692   13052 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:14:59.372991   13052 main.go:141] libmachine: Creating SSH key...
	I0701 05:14:59.428136   13052 main.go:141] libmachine: Creating Disk image...
	I0701 05:14:59.428141   13052 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:14:59.428304   13052 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2
	I0701 05:14:59.437432   13052 main.go:141] libmachine: STDOUT: 
	I0701 05:14:59.437456   13052 main.go:141] libmachine: STDERR: 
	I0701 05:14:59.437502   13052 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2 +20000M
	I0701 05:14:59.445529   13052 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:14:59.445543   13052 main.go:141] libmachine: STDERR: 
	I0701 05:14:59.445555   13052 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2
	I0701 05:14:59.445559   13052 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:14:59.445589   13052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:f0:c6:67:7b:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2
	I0701 05:14:59.447192   13052 main.go:141] libmachine: STDOUT: 
	I0701 05:14:59.447218   13052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:14:59.447235   13052 client.go:171] duration metric: took 203.234459ms to LocalClient.Create
	I0701 05:15:01.449448   13052 start.go:128] duration metric: took 2.224420375s to createHost
	I0701 05:15:01.449519   13052 start.go:83] releasing machines lock for "calico-731000", held for 2.224550167s
	W0701 05:15:01.449612   13052 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:01.455491   13052 out.go:177] * Deleting "calico-731000" in qemu2 ...
	W0701 05:15:01.470391   13052 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:01.470417   13052 start.go:728] Will try again in 5 seconds ...
	I0701 05:15:06.472627   13052 start.go:360] acquireMachinesLock for calico-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:06.472817   13052 start.go:364] duration metric: took 151.5µs to acquireMachinesLock for "calico-731000"
	I0701 05:15:06.472881   13052 start.go:93] Provisioning new machine with config: &{Name:calico-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:15:06.472931   13052 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:15:06.482138   13052 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:15:06.503650   13052 start.go:159] libmachine.API.Create for "calico-731000" (driver="qemu2")
	I0701 05:15:06.503690   13052 client.go:168] LocalClient.Create starting
	I0701 05:15:06.503767   13052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:15:06.503806   13052 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:06.503814   13052 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:06.503855   13052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:15:06.503889   13052 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:06.503895   13052 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:06.504405   13052 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:15:06.632944   13052 main.go:141] libmachine: Creating SSH key...
	I0701 05:15:06.710969   13052 main.go:141] libmachine: Creating Disk image...
	I0701 05:15:06.710975   13052 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:15:06.711139   13052 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2
	I0701 05:15:06.720313   13052 main.go:141] libmachine: STDOUT: 
	I0701 05:15:06.720334   13052 main.go:141] libmachine: STDERR: 
	I0701 05:15:06.720390   13052 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2 +20000M
	I0701 05:15:06.728377   13052 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:15:06.728393   13052 main.go:141] libmachine: STDERR: 
	I0701 05:15:06.728405   13052 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2
	I0701 05:15:06.728412   13052 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:15:06.728456   13052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:f2:6c:1d:20:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/calico-731000/disk.qcow2
	I0701 05:15:06.730104   13052 main.go:141] libmachine: STDOUT: 
	I0701 05:15:06.730117   13052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:06.730129   13052 client.go:171] duration metric: took 226.432916ms to LocalClient.Create
	I0701 05:15:08.732335   13052 start.go:128] duration metric: took 2.259359375s to createHost
	I0701 05:15:08.732410   13052 start.go:83] releasing machines lock for "calico-731000", held for 2.259564542s
	W0701 05:15:08.732791   13052 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:08.741270   13052 out.go:177] 
	W0701 05:15:08.747436   13052 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:15:08.747471   13052 out.go:239] * 
	* 
	W0701 05:15:08.749999   13052 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:15:08.758416   13052 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.66s)
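
The logs also show the driver's recovery path: one create attempt, a profile delete, a fixed 5-second wait ("Will try again in 5 seconds ..."), one retry, then exit status 80 with GUEST_PROVISION. Below is a minimal sketch of that one-retry-with-fixed-delay shape, using a stand-in createHost that fails the way socket_vmnet does here; it is illustrative only, not minikube's actual start.go.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the libmachine create path; it always fails the
	// way these logs do while the socket_vmnet daemon is down.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err == nil {
			return // first attempt succeeded
		}
		fmt.Println("! StartHost failed, but will try again")
		time.Sleep(5 * time.Second) // mirrors "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err) // the real binary exits 80 here
		}
	}

Because the daemon never comes back within that window, the retry is guaranteed to fail, which is why each of these tests burns roughly the same 9-10 seconds before FAIL.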

TestNetworkPlugins/group/false/Start (9.74s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-731000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.733264209s)

-- stdout --
	* [false-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-731000" primary control-plane node in "false-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:15:11.221943   13169 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:15:11.222060   13169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:11.222063   13169 out.go:304] Setting ErrFile to fd 2...
	I0701 05:15:11.222065   13169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:11.222202   13169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:15:11.223230   13169 out.go:298] Setting JSON to false
	I0701 05:15:11.239561   13169 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8080,"bootTime":1719828031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:15:11.239649   13169 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:15:11.245646   13169 out.go:177] * [false-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:15:11.252736   13169 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:15:11.252821   13169 notify.go:220] Checking for updates...
	I0701 05:15:11.258689   13169 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:15:11.261717   13169 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:15:11.262841   13169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:15:11.265752   13169 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:15:11.268767   13169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:15:11.272002   13169 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:15:11.272064   13169 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:15:11.272113   13169 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:15:11.276685   13169 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:15:11.283749   13169 start.go:297] selected driver: qemu2
	I0701 05:15:11.283757   13169 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:15:11.283764   13169 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:15:11.286064   13169 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:15:11.288710   13169 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:15:11.291770   13169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:15:11.291786   13169 cni.go:84] Creating CNI manager for "false"
	I0701 05:15:11.291806   13169 start.go:340] cluster config:
	{Name:false-731000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:15:11.295231   13169 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:11.302683   13169 out.go:177] * Starting "false-731000" primary control-plane node in "false-731000" cluster
	I0701 05:15:11.306731   13169 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:15:11.306751   13169 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:15:11.306760   13169 cache.go:56] Caching tarball of preloaded images
	I0701 05:15:11.306840   13169 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:15:11.306846   13169 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:15:11.306896   13169 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/false-731000/config.json ...
	I0701 05:15:11.306906   13169 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/false-731000/config.json: {Name:mkeb49b6c612c9bc291ccd4bf3ecd3fe0d5d942f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:15:11.307209   13169 start.go:360] acquireMachinesLock for false-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:11.307239   13169 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "false-731000"
	I0701 05:15:11.307251   13169 start.go:93] Provisioning new machine with config: &{Name:false-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:15:11.307282   13169 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:15:11.311769   13169 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:15:11.327086   13169 start.go:159] libmachine.API.Create for "false-731000" (driver="qemu2")
	I0701 05:15:11.327107   13169 client.go:168] LocalClient.Create starting
	I0701 05:15:11.327167   13169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:15:11.327197   13169 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:11.327207   13169 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:11.327241   13169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:15:11.327263   13169 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:11.327271   13169 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:11.327731   13169 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:15:11.457236   13169 main.go:141] libmachine: Creating SSH key...
	I0701 05:15:11.570539   13169 main.go:141] libmachine: Creating Disk image...
	I0701 05:15:11.570545   13169 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:15:11.570707   13169 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2
	I0701 05:15:11.580367   13169 main.go:141] libmachine: STDOUT: 
	I0701 05:15:11.580385   13169 main.go:141] libmachine: STDERR: 
	I0701 05:15:11.580437   13169 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2 +20000M
	I0701 05:15:11.588362   13169 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:15:11.588386   13169 main.go:141] libmachine: STDERR: 
	I0701 05:15:11.588400   13169 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2
	I0701 05:15:11.588404   13169 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:15:11.588432   13169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:af:e6:77:b9:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2
	I0701 05:15:11.590094   13169 main.go:141] libmachine: STDOUT: 
	I0701 05:15:11.590107   13169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:11.590127   13169 client.go:171] duration metric: took 263.011083ms to LocalClient.Create
	I0701 05:15:13.592347   13169 start.go:128] duration metric: took 2.285021709s to createHost
	I0701 05:15:13.592437   13169 start.go:83] releasing machines lock for "false-731000", held for 2.285175084s
	W0701 05:15:13.592491   13169 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:13.597441   13169 out.go:177] * Deleting "false-731000" in qemu2 ...
	W0701 05:15:13.621994   13169 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:13.622027   13169 start.go:728] Will try again in 5 seconds ...
	I0701 05:15:18.623370   13169 start.go:360] acquireMachinesLock for false-731000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:18.623453   13169 start.go:364] duration metric: took 63.792µs to acquireMachinesLock for "false-731000"
	I0701 05:15:18.623466   13169 start.go:93] Provisioning new machine with config: &{Name:false-731000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-731000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:15:18.623518   13169 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:15:18.628718   13169 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 05:15:18.644287   13169 start.go:159] libmachine.API.Create for "false-731000" (driver="qemu2")
	I0701 05:15:18.644317   13169 client.go:168] LocalClient.Create starting
	I0701 05:15:18.644402   13169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:15:18.644443   13169 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:18.644457   13169 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:18.644494   13169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:15:18.644517   13169 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:18.644524   13169 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:18.644839   13169 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:15:18.772212   13169 main.go:141] libmachine: Creating SSH key...
	I0701 05:15:18.863763   13169 main.go:141] libmachine: Creating Disk image...
	I0701 05:15:18.863770   13169 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:15:18.863991   13169 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2
	I0701 05:15:18.873267   13169 main.go:141] libmachine: STDOUT: 
	I0701 05:15:18.873297   13169 main.go:141] libmachine: STDERR: 
	I0701 05:15:18.873342   13169 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2 +20000M
	I0701 05:15:18.881291   13169 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:15:18.881306   13169 main.go:141] libmachine: STDERR: 
	I0701 05:15:18.881318   13169 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2
	I0701 05:15:18.881322   13169 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:15:18.881364   13169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:33:33:1d:3b:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/false-731000/disk.qcow2
	I0701 05:15:18.882995   13169 main.go:141] libmachine: STDOUT: 
	I0701 05:15:18.883014   13169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:18.883028   13169 client.go:171] duration metric: took 238.705917ms to LocalClient.Create
	I0701 05:15:20.885256   13169 start.go:128] duration metric: took 2.261694083s to createHost
	I0701 05:15:20.885384   13169 start.go:83] releasing machines lock for "false-731000", held for 2.261904834s
	W0701 05:15:20.885731   13169 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:20.895568   13169 out.go:177] 
	W0701 05:15:20.900689   13169 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:15:20.900714   13169 out.go:239] * 
	* 
	W0701 05:15:20.903475   13169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:15:20.912555   13169 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.74s)
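Every start attempt in these logs dies the same way: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never launched and minikube exits with status 80. A minimal diagnostic sketch for the CI host, assuming socket_vmnet was installed from source under /opt/socket_vmnet as the client path in the log suggests (the daemon invocation and gateway address follow the socket_vmnet README and are illustrative, not part of this run):

	ls -l /var/run/socket_vmnet   # the listening socket should exist while the daemon is up
	pgrep -fl socket_vmnet        # and the daemon process should be running
	# if neither is present, starting the daemon by hand should clear the error:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet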

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-821000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-821000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.904437667s)

                                                
                                                
-- stdout --
	* [old-k8s-version-821000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-821000" primary control-plane node in "old-k8s-version-821000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-821000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 05:15:23.113879   13286 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:15:23.114127   13286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:23.114133   13286 out.go:304] Setting ErrFile to fd 2...
	I0701 05:15:23.114135   13286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:23.114333   13286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:15:23.115765   13286 out.go:298] Setting JSON to false
	I0701 05:15:23.132425   13286 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8092,"bootTime":1719828031,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:15:23.132519   13286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:15:23.138807   13286 out.go:177] * [old-k8s-version-821000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:15:23.145734   13286 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:15:23.145767   13286 notify.go:220] Checking for updates...
	I0701 05:15:23.152658   13286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:15:23.155667   13286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:15:23.158764   13286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:15:23.161713   13286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:15:23.164681   13286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:15:23.168113   13286 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:15:23.168189   13286 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:15:23.168241   13286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:15:23.172622   13286 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:15:23.179711   13286 start.go:297] selected driver: qemu2
	I0701 05:15:23.179717   13286 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:15:23.179723   13286 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:15:23.181907   13286 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:15:23.184678   13286 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:15:23.187806   13286 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:15:23.187849   13286 cni.go:84] Creating CNI manager for ""
	I0701 05:15:23.187856   13286 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0701 05:15:23.187896   13286 start.go:340] cluster config:
	{Name:old-k8s-version-821000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:15:23.191570   13286 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:23.198631   13286 out.go:177] * Starting "old-k8s-version-821000" primary control-plane node in "old-k8s-version-821000" cluster
	I0701 05:15:23.202708   13286 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 05:15:23.202724   13286 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0701 05:15:23.202732   13286 cache.go:56] Caching tarball of preloaded images
	I0701 05:15:23.202797   13286 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:15:23.202802   13286 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0701 05:15:23.202853   13286 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/old-k8s-version-821000/config.json ...
	I0701 05:15:23.202864   13286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/old-k8s-version-821000/config.json: {Name:mk0b011979232867b1ba4a81ecb4b700b9bcd082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:15:23.203322   13286 start.go:360] acquireMachinesLock for old-k8s-version-821000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:23.203356   13286 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "old-k8s-version-821000"
	I0701 05:15:23.203369   13286 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:15:23.203401   13286 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:15:23.208785   13286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:15:23.226215   13286 start.go:159] libmachine.API.Create for "old-k8s-version-821000" (driver="qemu2")
	I0701 05:15:23.226238   13286 client.go:168] LocalClient.Create starting
	I0701 05:15:23.226301   13286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:15:23.226332   13286 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:23.226341   13286 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:23.226377   13286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:15:23.226399   13286 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:23.226405   13286 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:23.226872   13286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:15:23.355669   13286 main.go:141] libmachine: Creating SSH key...
	I0701 05:15:23.583696   13286 main.go:141] libmachine: Creating Disk image...
	I0701 05:15:23.583709   13286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:15:23.583889   13286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2
	I0701 05:15:23.593483   13286 main.go:141] libmachine: STDOUT: 
	I0701 05:15:23.593506   13286 main.go:141] libmachine: STDERR: 
	I0701 05:15:23.593560   13286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2 +20000M
	I0701 05:15:23.601574   13286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:15:23.601587   13286 main.go:141] libmachine: STDERR: 
	I0701 05:15:23.601601   13286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2
	I0701 05:15:23.601606   13286 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:15:23.601649   13286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:01:06:fb:2c:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2
	I0701 05:15:23.603279   13286 main.go:141] libmachine: STDOUT: 
	I0701 05:15:23.603293   13286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:23.603313   13286 client.go:171] duration metric: took 377.068833ms to LocalClient.Create
	I0701 05:15:25.605412   13286 start.go:128] duration metric: took 2.401986542s to createHost
	I0701 05:15:25.605436   13286 start.go:83] releasing machines lock for "old-k8s-version-821000", held for 2.402060667s
	W0701 05:15:25.605478   13286 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:25.617176   13286 out.go:177] * Deleting "old-k8s-version-821000" in qemu2 ...
	W0701 05:15:25.630018   13286 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:25.630025   13286 start.go:728] Will try again in 5 seconds ...
	I0701 05:15:30.632299   13286 start.go:360] acquireMachinesLock for old-k8s-version-821000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:30.632567   13286 start.go:364] duration metric: took 206.708µs to acquireMachinesLock for "old-k8s-version-821000"
	I0701 05:15:30.632606   13286 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:15:30.632785   13286 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:15:30.640114   13286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:15:30.669290   13286 start.go:159] libmachine.API.Create for "old-k8s-version-821000" (driver="qemu2")
	I0701 05:15:30.669336   13286 client.go:168] LocalClient.Create starting
	I0701 05:15:30.669431   13286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:15:30.669489   13286 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:30.669502   13286 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:30.669547   13286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:15:30.669581   13286 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:30.669598   13286 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:30.670003   13286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:15:30.801407   13286 main.go:141] libmachine: Creating SSH key...
	I0701 05:15:30.928966   13286 main.go:141] libmachine: Creating Disk image...
	I0701 05:15:30.928974   13286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:15:30.929156   13286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2
	I0701 05:15:30.938620   13286 main.go:141] libmachine: STDOUT: 
	I0701 05:15:30.938639   13286 main.go:141] libmachine: STDERR: 
	I0701 05:15:30.938695   13286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2 +20000M
	I0701 05:15:30.946766   13286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:15:30.946781   13286 main.go:141] libmachine: STDERR: 
	I0701 05:15:30.946790   13286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2
	I0701 05:15:30.946793   13286 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:15:30.946835   13286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:85:bc:0c:88:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2
	I0701 05:15:30.948457   13286 main.go:141] libmachine: STDOUT: 
	I0701 05:15:30.948473   13286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:30.948485   13286 client.go:171] duration metric: took 279.141708ms to LocalClient.Create
	I0701 05:15:32.950616   13286 start.go:128] duration metric: took 2.317795375s to createHost
	I0701 05:15:32.950650   13286 start.go:83] releasing machines lock for "old-k8s-version-821000", held for 2.318056417s
	W0701 05:15:32.950878   13286 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-821000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-821000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:32.962299   13286 out.go:177] 
	W0701 05:15:32.966325   13286 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:15:32.966350   13286 out.go:239] * 
	* 
	W0701 05:15:32.968065   13286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:15:32.976297   13286 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-821000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (53.778708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-821000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-821000 create -f testdata/busybox.yaml: exit status 1 (29.646709ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-821000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-821000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (29.677667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-821000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (29.815042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
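Note the shift in failure mode from here on: DeployApp and EnableAddonWhileActive (below) never reach qemu at all. FirstStart exited before a kubeconfig entry was written, so kubectl has no context named "old-k8s-version-821000" and every --context invocation fails immediately with exit status 1. These are cascade failures of the socket_vmnet problem above; a quick confirmation on the host (a sketch, not part of the recorded run) would be:

	kubectl config get-contexts old-k8s-version-821000   # expected here: error: context old-k8s-version-821000 not found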

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-821000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-821000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-821000 describe deploy/metrics-server -n kube-system: exit status 1 (26.759834ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-821000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-821000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (30.026834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-821000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-821000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.182496375s)

                                                
                                                
-- stdout --
	* [old-k8s-version-821000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-821000" primary control-plane node in "old-k8s-version-821000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-821000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-821000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 05:15:36.488184   13343 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:15:36.488322   13343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:36.488325   13343 out.go:304] Setting ErrFile to fd 2...
	I0701 05:15:36.488328   13343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:36.488467   13343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:15:36.489510   13343 out.go:298] Setting JSON to false
	I0701 05:15:36.505890   13343 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8105,"bootTime":1719828031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:15:36.505961   13343 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:15:36.509705   13343 out.go:177] * [old-k8s-version-821000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:15:36.516666   13343 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:15:36.516723   13343 notify.go:220] Checking for updates...
	I0701 05:15:36.523688   13343 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:15:36.526638   13343 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:15:36.529693   13343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:15:36.532581   13343 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:15:36.535624   13343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:15:36.538926   13343 config.go:182] Loaded profile config "old-k8s-version-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0701 05:15:36.540284   13343 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0701 05:15:36.542587   13343 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:15:36.546632   13343 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:15:36.551639   13343 start.go:297] selected driver: qemu2
	I0701 05:15:36.551652   13343 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:15:36.551706   13343 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:15:36.553963   13343 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:15:36.553984   13343 cni.go:84] Creating CNI manager for ""
	I0701 05:15:36.553991   13343 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0701 05:15:36.554018   13343 start.go:340] cluster config:
	{Name:old-k8s-version-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:15:36.557342   13343 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:36.564628   13343 out.go:177] * Starting "old-k8s-version-821000" primary control-plane node in "old-k8s-version-821000" cluster
	I0701 05:15:36.568611   13343 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 05:15:36.568623   13343 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0701 05:15:36.568629   13343 cache.go:56] Caching tarball of preloaded images
	I0701 05:15:36.568681   13343 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:15:36.568686   13343 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0701 05:15:36.568731   13343 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/old-k8s-version-821000/config.json ...
	I0701 05:15:36.569279   13343 start.go:360] acquireMachinesLock for old-k8s-version-821000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:36.569310   13343 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "old-k8s-version-821000"
	I0701 05:15:36.569319   13343 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:15:36.569326   13343 fix.go:54] fixHost starting: 
	I0701 05:15:36.569436   13343 fix.go:112] recreateIfNeeded on old-k8s-version-821000: state=Stopped err=<nil>
	W0701 05:15:36.569444   13343 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:15:36.573605   13343 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-821000" ...
	I0701 05:15:36.581602   13343 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:85:bc:0c:88:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2
	I0701 05:15:36.583485   13343 main.go:141] libmachine: STDOUT: 
	I0701 05:15:36.583512   13343 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:36.583538   13343 fix.go:56] duration metric: took 14.212416ms for fixHost
	I0701 05:15:36.583543   13343 start.go:83] releasing machines lock for "old-k8s-version-821000", held for 14.228709ms
	W0701 05:15:36.583549   13343 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:15:36.583579   13343 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:36.583583   13343 start.go:728] Will try again in 5 seconds ...
	I0701 05:15:41.585893   13343 start.go:360] acquireMachinesLock for old-k8s-version-821000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:41.586465   13343 start.go:364] duration metric: took 428.792µs to acquireMachinesLock for "old-k8s-version-821000"
	I0701 05:15:41.586572   13343 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:15:41.586593   13343 fix.go:54] fixHost starting: 
	I0701 05:15:41.587349   13343 fix.go:112] recreateIfNeeded on old-k8s-version-821000: state=Stopped err=<nil>
	W0701 05:15:41.587377   13343 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:15:41.593177   13343 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-821000" ...
	I0701 05:15:41.598264   13343 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:85:bc:0c:88:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/old-k8s-version-821000/disk.qcow2
	I0701 05:15:41.607914   13343 main.go:141] libmachine: STDOUT: 
	I0701 05:15:41.607984   13343 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:41.608093   13343 fix.go:56] duration metric: took 21.498667ms for fixHost
	I0701 05:15:41.608117   13343 start.go:83] releasing machines lock for "old-k8s-version-821000", held for 21.627459ms
	W0701 05:15:41.608297   13343 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-821000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-821000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:41.616033   13343 out.go:177] 
	W0701 05:15:41.620101   13343 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:15:41.620127   13343 out.go:239] * 
	* 
	W0701 05:15:41.622553   13343 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:15:41.629038   13343 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-821000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (65.014958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
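Both restart attempts in this test fail at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket, reporting Failed to connect to "/var/run/socket_vmnet": Connection refused. A minimal sketch for checking the daemon on the build host follows, assuming a standard socket_vmnet install; the launchd label is an assumption, not taken from this log:

	# Is anything serving the socket that socket_vmnet_client dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet runs under launchd (label is an assumption):
	sudo launchctl print system/io.github.lima-vm.socket_vmnet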

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-821000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (32.84575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-821000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-821000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-821000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.134ms)

** stderr ** 
	error: context "old-k8s-version-821000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-821000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (29.99775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-821000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (29.461917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
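The image check above compares a fixed want-list of k8s.gcr.io images for v1.20.0 against the output of "out/minikube-darwin-arm64 -p old-k8s-version-821000 image list --format=json"; since the host never started, that command returns an empty list and every expected image is reported missing. On a running profile the same comparison can be reproduced by hand; piping through jq and the .repoTags field name are assumptions about the JSON shape, not confirmed by this log:

	# List image tags the way the test does (jq and .repoTags are assumptions):
	out/minikube-darwin-arm64 -p old-k8s-version-821000 image list --format=json | jq -r '.[].repoTags[]'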

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-821000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-821000 --alsologtostderr -v=1: exit status 83 (41.957ms)

-- stdout --
	* The control-plane node old-k8s-version-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-821000"

-- /stdout --
** stderr ** 
	I0701 05:15:41.900226   13362 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:15:41.901177   13362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:41.901181   13362 out.go:304] Setting ErrFile to fd 2...
	I0701 05:15:41.901184   13362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:41.901350   13362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:15:41.901553   13362 out.go:298] Setting JSON to false
	I0701 05:15:41.901563   13362 mustload.go:65] Loading cluster: old-k8s-version-821000
	I0701 05:15:41.901779   13362 config.go:182] Loaded profile config "old-k8s-version-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0701 05:15:41.906466   13362 out.go:177] * The control-plane node old-k8s-version-821000 host is not running: state=Stopped
	I0701 05:15:41.910490   13362 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-821000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-821000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (29.857459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-821000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (29.785167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
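Two different exit codes appear in this block: pause exits 83 when the control-plane host is not running, while the post-mortem status probe exits 7 for a profile reporting "Stopped" (which helpers_test.go treats as possibly ok). A small sketch that distinguishes them in a wrapper script, assuming these observed meanings hold:

	# Exit codes observed above: 83 = host not running (pause), 7 = status reports Stopped.
	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
	rc=$?
	if [ "$rc" -eq 7 ]; then echo "profile stopped (exit 7, may be ok for post-mortem)"; fi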

TestStartStop/group/no-preload/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-340000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-340000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.891057167s)

-- stdout --
	* [no-preload-340000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-340000" primary control-plane node in "no-preload-340000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-340000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:15:42.213091   13379 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:15:42.213326   13379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:42.213330   13379 out.go:304] Setting ErrFile to fd 2...
	I0701 05:15:42.213332   13379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:42.213450   13379 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:15:42.214731   13379 out.go:298] Setting JSON to false
	I0701 05:15:42.231372   13379 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8111,"bootTime":1719828031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:15:42.231442   13379 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:15:42.236456   13379 out.go:177] * [no-preload-340000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:15:42.243455   13379 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:15:42.243490   13379 notify.go:220] Checking for updates...
	I0701 05:15:42.250452   13379 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:15:42.253459   13379 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:15:42.256454   13379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:15:42.259394   13379 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:15:42.262396   13379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:15:42.265697   13379 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:15:42.265761   13379 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0701 05:15:42.265815   13379 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:15:42.268384   13379 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:15:42.275434   13379 start.go:297] selected driver: qemu2
	I0701 05:15:42.275441   13379 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:15:42.275449   13379 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:15:42.277733   13379 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:15:42.278920   13379 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:15:42.281511   13379 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:15:42.281550   13379 cni.go:84] Creating CNI manager for ""
	I0701 05:15:42.281558   13379 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:15:42.281562   13379 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:15:42.281599   13379 start.go:340] cluster config:
	{Name:no-preload-340000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:15:42.285030   13379 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:42.292410   13379 out.go:177] * Starting "no-preload-340000" primary control-plane node in "no-preload-340000" cluster
	I0701 05:15:42.296395   13379 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:15:42.296448   13379 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/no-preload-340000/config.json ...
	I0701 05:15:42.296462   13379 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/no-preload-340000/config.json: {Name:mk8d7af894d778bb41e137c4baeb5d877f52a698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:15:42.296467   13379 cache.go:107] acquiring lock: {Name:mkb28b7d830b0b18ece9878c83ddd303ab5bb3c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:42.296467   13379 cache.go:107] acquiring lock: {Name:mk97c1ddba98bbf3dedcf194dfbfdb0e98232034 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:42.296487   13379 cache.go:107] acquiring lock: {Name:mk0617bb20099dc0bee05a1bc72513009715a467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:42.296515   13379 cache.go:115] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 05:15:42.296521   13379 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 55.708µs
	I0701 05:15:42.296526   13379 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 05:15:42.296533   13379 cache.go:107] acquiring lock: {Name:mk279c4200e7a6fce42d6790bc23ef944f49cc0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:42.296626   13379 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0701 05:15:42.296661   13379 cache.go:107] acquiring lock: {Name:mkd7e395ca1bc1051a1b019df04a499af2a7e8d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:42.296666   13379 cache.go:107] acquiring lock: {Name:mk1e0e69497cf4a3e35b4f6d9e800ee2f5704dd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:42.296669   13379 cache.go:107] acquiring lock: {Name:mk57e459ee4aa581e13a5b1e702f29d4f4dd896f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:42.296718   13379 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0701 05:15:42.296758   13379 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0701 05:15:42.296805   13379 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0701 05:15:42.296826   13379 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0701 05:15:42.296853   13379 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0701 05:15:42.296892   13379 start.go:360] acquireMachinesLock for no-preload-340000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:42.296889   13379 cache.go:107] acquiring lock: {Name:mk72ec3a83ced91701e46386f69e5cf5616cb14d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:42.296926   13379 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "no-preload-340000"
	I0701 05:15:42.296941   13379 start.go:93] Provisioning new machine with config: &{Name:no-preload-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:15:42.296968   13379 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:15:42.297014   13379 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0701 05:15:42.305422   13379 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:15:42.309803   13379 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0701 05:15:42.309826   13379 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0701 05:15:42.309863   13379 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0701 05:15:42.313708   13379 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0701 05:15:42.313749   13379 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0701 05:15:42.313787   13379 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0701 05:15:42.314321   13379 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0701 05:15:42.321481   13379 start.go:159] libmachine.API.Create for "no-preload-340000" (driver="qemu2")
	I0701 05:15:42.321500   13379 client.go:168] LocalClient.Create starting
	I0701 05:15:42.321564   13379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:15:42.321593   13379 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:42.321601   13379 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:42.321645   13379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:15:42.321667   13379 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:42.321680   13379 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:42.322055   13379 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:15:42.456921   13379 main.go:141] libmachine: Creating SSH key...
	I0701 05:15:42.529727   13379 main.go:141] libmachine: Creating Disk image...
	I0701 05:15:42.529752   13379 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:15:42.529958   13379 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2
	I0701 05:15:42.539969   13379 main.go:141] libmachine: STDOUT: 
	I0701 05:15:42.539984   13379 main.go:141] libmachine: STDERR: 
	I0701 05:15:42.540031   13379 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2 +20000M
	I0701 05:15:42.549431   13379 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:15:42.549449   13379 main.go:141] libmachine: STDERR: 
	I0701 05:15:42.549461   13379 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2
	I0701 05:15:42.549466   13379 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:15:42.549494   13379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:37:52:5e:e2:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2
	I0701 05:15:42.551499   13379 main.go:141] libmachine: STDOUT: 
	I0701 05:15:42.551516   13379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:42.551537   13379 client.go:171] duration metric: took 230.029834ms to LocalClient.Create
	I0701 05:15:42.669329   13379 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0701 05:15:42.669933   13379 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2
	I0701 05:15:42.690757   13379 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0701 05:15:42.695910   13379 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2
	I0701 05:15:42.713216   13379 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0701 05:15:42.755969   13379 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0701 05:15:42.772463   13379 cache.go:162] opening:  /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2
	I0701 05:15:42.854718   13379 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0701 05:15:42.854742   13379 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 558.20525ms
	I0701 05:15:42.854748   13379 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0701 05:15:44.551892   13379 start.go:128] duration metric: took 2.25489325s to createHost
	I0701 05:15:44.551917   13379 start.go:83] releasing machines lock for "no-preload-340000", held for 2.254969958s
	W0701 05:15:44.551955   13379 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:44.562784   13379 out.go:177] * Deleting "no-preload-340000" in qemu2 ...
	W0701 05:15:44.578618   13379 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:44.578636   13379 start.go:728] Will try again in 5 seconds ...
	I0701 05:15:45.920585   13379 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 exists
	I0701 05:15:45.920598   13379 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.2" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2" took 3.624101083s
	I0701 05:15:45.920608   13379 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.2 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 succeeded
	I0701 05:15:46.007673   13379 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 exists
	I0701 05:15:46.007682   13379 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.2" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2" took 3.711022708s
	I0701 05:15:46.007697   13379 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.2 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 succeeded
	I0701 05:15:46.171043   13379 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0701 05:15:46.171058   13379 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.874332625s
	I0701 05:15:46.171066   13379 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0701 05:15:46.473251   13379 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 exists
	I0701 05:15:46.473269   13379 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.2" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2" took 4.176613167s
	I0701 05:15:46.473280   13379 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.2 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 succeeded
	I0701 05:15:47.263630   13379 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 exists
	I0701 05:15:47.263652   13379 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.2" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2" took 4.967156458s
	I0701 05:15:47.263665   13379 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.2 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 succeeded
	I0701 05:15:49.554804   13379 cache.go:157] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0701 05:15:49.554850   13379 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 7.25815475s
	I0701 05:15:49.554876   13379 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0701 05:15:49.554908   13379 cache.go:87] Successfully saved all images to host disk.
	I0701 05:15:49.580800   13379 start.go:360] acquireMachinesLock for no-preload-340000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:49.581162   13379 start.go:364] duration metric: took 300.417µs to acquireMachinesLock for "no-preload-340000"
	I0701 05:15:49.581265   13379 start.go:93] Provisioning new machine with config: &{Name:no-preload-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:15:49.581481   13379 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:15:49.591124   13379 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:15:49.640653   13379 start.go:159] libmachine.API.Create for "no-preload-340000" (driver="qemu2")
	I0701 05:15:49.640719   13379 client.go:168] LocalClient.Create starting
	I0701 05:15:49.640836   13379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:15:49.640906   13379 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:49.640927   13379 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:49.640990   13379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:15:49.641043   13379 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:49.641059   13379 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:49.641603   13379 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:15:49.782358   13379 main.go:141] libmachine: Creating SSH key...
	I0701 05:15:50.012972   13379 main.go:141] libmachine: Creating Disk image...
	I0701 05:15:50.012983   13379 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:15:50.013181   13379 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2
	I0701 05:15:50.023146   13379 main.go:141] libmachine: STDOUT: 
	I0701 05:15:50.023172   13379 main.go:141] libmachine: STDERR: 
	I0701 05:15:50.023231   13379 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2 +20000M
	I0701 05:15:50.031884   13379 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:15:50.031900   13379 main.go:141] libmachine: STDERR: 
	I0701 05:15:50.031917   13379 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2
	I0701 05:15:50.031923   13379 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:15:50.031971   13379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:4b:a2:81:f7:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2
	I0701 05:15:50.033834   13379 main.go:141] libmachine: STDOUT: 
	I0701 05:15:50.033847   13379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:50.033859   13379 client.go:171] duration metric: took 393.130375ms to LocalClient.Create
	I0701 05:15:52.036042   13379 start.go:128] duration metric: took 2.454512667s to createHost
	I0701 05:15:52.036086   13379 start.go:83] releasing machines lock for "no-preload-340000", held for 2.454889042s
	W0701 05:15:52.036375   13379 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-340000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-340000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:52.050880   13379 out.go:177] 
	W0701 05:15:52.055960   13379 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:15:52.056004   13379 out.go:239] * 
	* 
	W0701 05:15:52.058069   13379 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:15:52.066890   13379 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-340000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (50.185792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-340000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.94s)
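Unlike the old-k8s-version profile, no-preload goes through the create path twice (create, delete, re-create) and both attempts die in libmachine's "Starting QEMU VM..." step, before any of the individually cached images are used. The failure can be isolated from minikube entirely: socket_vmnet_client takes the socket path followed by a command to run with the vmnet fd attached, so a trivial command reproduces the refusal when the daemon is down (using /usr/bin/true as the command is an assumption about acceptable usage, not taken from this log):

	# Reproduce the "Connection refused" outside minikube:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true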

TestStartStop/group/embed-certs/serial/FirstStart (11.18s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-416000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-416000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (11.116033041s)

-- stdout --
	* [embed-certs-416000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-416000" primary control-plane node in "embed-certs-416000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-416000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0701 05:15:50.727097   13428 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:15:50.727236   13428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:50.727240   13428 out.go:304] Setting ErrFile to fd 2...
	I0701 05:15:50.727242   13428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:50.727369   13428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:15:50.728476   13428 out.go:298] Setting JSON to false
	I0701 05:15:50.744568   13428 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8119,"bootTime":1719828031,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:15:50.744631   13428 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:15:50.749325   13428 out.go:177] * [embed-certs-416000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:15:50.756332   13428 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:15:50.756418   13428 notify.go:220] Checking for updates...
	I0701 05:15:50.762281   13428 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:15:50.765240   13428 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:15:50.768323   13428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:15:50.771282   13428 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:15:50.786246   13428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:15:50.790595   13428 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:15:50.790669   13428 config.go:182] Loaded profile config "no-preload-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:15:50.790729   13428 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:15:50.795318   13428 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:15:50.803280   13428 start.go:297] selected driver: qemu2
	I0701 05:15:50.803285   13428 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:15:50.803291   13428 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:15:50.805587   13428 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:15:50.810309   13428 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:15:50.813437   13428 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:15:50.813473   13428 cni.go:84] Creating CNI manager for ""
	I0701 05:15:50.813482   13428 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:15:50.813487   13428 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:15:50.813525   13428 start.go:340] cluster config:
	{Name:embed-certs-416000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:15:50.817438   13428 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:50.825293   13428 out.go:177] * Starting "embed-certs-416000" primary control-plane node in "embed-certs-416000" cluster
	I0701 05:15:50.829263   13428 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:15:50.829279   13428 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:15:50.829287   13428 cache.go:56] Caching tarball of preloaded images
	I0701 05:15:50.829349   13428 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:15:50.829356   13428 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:15:50.829430   13428 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/embed-certs-416000/config.json ...
	I0701 05:15:50.829442   13428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/embed-certs-416000/config.json: {Name:mkc95b27abc6bc9cb4b9822864ba17334c786331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:15:50.829687   13428 start.go:360] acquireMachinesLock for embed-certs-416000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:52.036246   13428 start.go:364] duration metric: took 1.2064785s to acquireMachinesLock for "embed-certs-416000"
	I0701 05:15:52.036454   13428 start.go:93] Provisioning new machine with config: &{Name:embed-certs-416000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:15:52.036691   13428 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:15:52.046914   13428 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:15:52.096659   13428 start.go:159] libmachine.API.Create for "embed-certs-416000" (driver="qemu2")
	I0701 05:15:52.096711   13428 client.go:168] LocalClient.Create starting
	I0701 05:15:52.096834   13428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:15:52.096887   13428 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:52.096902   13428 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:52.096968   13428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:15:52.097012   13428 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:52.097031   13428 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:52.097631   13428 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:15:52.243549   13428 main.go:141] libmachine: Creating SSH key...
	I0701 05:15:52.351368   13428 main.go:141] libmachine: Creating Disk image...
	I0701 05:15:52.351377   13428 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:15:52.351575   13428 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2
	I0701 05:15:52.361528   13428 main.go:141] libmachine: STDOUT: 
	I0701 05:15:52.361549   13428 main.go:141] libmachine: STDERR: 
	I0701 05:15:52.361608   13428 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2 +20000M
	I0701 05:15:52.370461   13428 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:15:52.370482   13428 main.go:141] libmachine: STDERR: 
	I0701 05:15:52.370506   13428 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2
	I0701 05:15:52.370510   13428 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:15:52.370543   13428 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:68:c6:d0:5f:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2
	I0701 05:15:52.372410   13428 main.go:141] libmachine: STDOUT: 
	I0701 05:15:52.372427   13428 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:52.372447   13428 client.go:171] duration metric: took 275.72775ms to LocalClient.Create
	I0701 05:15:54.374828   13428 start.go:128] duration metric: took 2.338061333s to createHost
	I0701 05:15:54.375018   13428 start.go:83] releasing machines lock for "embed-certs-416000", held for 2.338721042s
	W0701 05:15:54.375070   13428 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:54.387264   13428 out.go:177] * Deleting "embed-certs-416000" in qemu2 ...
	W0701 05:15:54.410364   13428 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:54.410397   13428 start.go:728] Will try again in 5 seconds ...
	I0701 05:15:59.412638   13428 start.go:360] acquireMachinesLock for embed-certs-416000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:59.412974   13428 start.go:364] duration metric: took 266.291µs to acquireMachinesLock for "embed-certs-416000"
	I0701 05:15:59.413088   13428 start.go:93] Provisioning new machine with config: &{Name:embed-certs-416000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:15:59.413361   13428 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:15:59.422848   13428 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:15:59.471712   13428 start.go:159] libmachine.API.Create for "embed-certs-416000" (driver="qemu2")
	I0701 05:15:59.471759   13428 client.go:168] LocalClient.Create starting
	I0701 05:15:59.471923   13428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:15:59.471995   13428 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:59.472013   13428 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:59.472079   13428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:15:59.472127   13428 main.go:141] libmachine: Decoding PEM data...
	I0701 05:15:59.472149   13428 main.go:141] libmachine: Parsing certificate...
	I0701 05:15:59.472726   13428 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:15:59.613611   13428 main.go:141] libmachine: Creating SSH key...
	I0701 05:15:59.740515   13428 main.go:141] libmachine: Creating Disk image...
	I0701 05:15:59.740521   13428 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:15:59.740684   13428 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2
	I0701 05:15:59.750110   13428 main.go:141] libmachine: STDOUT: 
	I0701 05:15:59.750129   13428 main.go:141] libmachine: STDERR: 
	I0701 05:15:59.750178   13428 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2 +20000M
	I0701 05:15:59.758134   13428 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:15:59.758148   13428 main.go:141] libmachine: STDERR: 
	I0701 05:15:59.758162   13428 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2
	I0701 05:15:59.758167   13428 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:15:59.758202   13428 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:4c:88:5a:16:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2
	I0701 05:15:59.759806   13428 main.go:141] libmachine: STDOUT: 
	I0701 05:15:59.759819   13428 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:59.759831   13428 client.go:171] duration metric: took 288.065417ms to LocalClient.Create
	I0701 05:16:01.762034   13428 start.go:128] duration metric: took 2.3486275s to createHost
	I0701 05:16:01.762102   13428 start.go:83] releasing machines lock for "embed-certs-416000", held for 2.349092041s
	W0701 05:16:01.762526   13428 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-416000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-416000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:01.776074   13428 out.go:177] 
	W0701 05:16:01.780097   13428 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:16:01.780122   13428 out.go:239] * 
	* 
	W0701 05:16:01.782980   13428 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:16:01.791043   13428 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-416000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (63.019042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.18s)
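Note the recovery path in the stderr above: after the first create fails, minikube deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ..."), retries the create once, and only then exits with GUEST_PROVISION (exit status 80). A sketch of that retry-once shape; the function name here is a hypothetical stand-in, not minikube's internals:

    // retry_once.go: fail, wait 5s, retry once, then give up (as in the log).
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost is a stand-in that always fails, as on this host.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }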

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-340000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-340000 create -f testdata/busybox.yaml: exit status 1 (31.177833ms)

** stderr ** 
	error: context "no-preload-340000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-340000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (34.409ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-340000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (34.906666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-340000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-340000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-340000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-340000 describe deploy/metrics-server -n kube-system: exit status 1 (27.78075ms)

** stderr ** 
	error: context "no-preload-340000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-340000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (30.734583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-340000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

TestStartStop/group/no-preload/serial/SecondStart (5.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-340000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-340000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.517103583s)

-- stdout --
	* [no-preload-340000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-340000" primary control-plane node in "no-preload-340000" cluster
	* Restarting existing qemu2 VM for "no-preload-340000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-340000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:15:56.337828   13476 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:15:56.337969   13476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:56.337973   13476 out.go:304] Setting ErrFile to fd 2...
	I0701 05:15:56.337975   13476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:15:56.338104   13476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:15:56.339093   13476 out.go:298] Setting JSON to false
	I0701 05:15:56.355236   13476 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8125,"bootTime":1719828031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:15:56.355298   13476 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:15:56.359074   13476 out.go:177] * [no-preload-340000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:15:56.367025   13476 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:15:56.367037   13476 notify.go:220] Checking for updates...
	I0701 05:15:56.374046   13476 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:15:56.376976   13476 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:15:56.379978   13476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:15:56.383026   13476 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:15:56.386033   13476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:15:56.389262   13476 config.go:182] Loaded profile config "no-preload-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:15:56.389502   13476 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:15:56.393992   13476 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:15:56.400981   13476 start.go:297] selected driver: qemu2
	I0701 05:15:56.400988   13476 start.go:901] validating driver "qemu2" against &{Name:no-preload-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:15:56.401059   13476 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:15:56.403448   13476 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:15:56.403483   13476 cni.go:84] Creating CNI manager for ""
	I0701 05:15:56.403490   13476 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:15:56.403515   13476 start.go:340] cluster config:
	{Name:no-preload-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:15:56.407172   13476 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:56.414873   13476 out.go:177] * Starting "no-preload-340000" primary control-plane node in "no-preload-340000" cluster
	I0701 05:15:56.419001   13476 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:15:56.419072   13476 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/no-preload-340000/config.json ...
	I0701 05:15:56.419091   13476 cache.go:107] acquiring lock: {Name:mk97c1ddba98bbf3dedcf194dfbfdb0e98232034 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:56.419091   13476 cache.go:107] acquiring lock: {Name:mkb28b7d830b0b18ece9878c83ddd303ab5bb3c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:56.419120   13476 cache.go:107] acquiring lock: {Name:mk279c4200e7a6fce42d6790bc23ef944f49cc0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:56.419153   13476 cache.go:107] acquiring lock: {Name:mk57e459ee4aa581e13a5b1e702f29d4f4dd896f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:56.419163   13476 cache.go:115] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 exists
	I0701 05:15:56.419165   13476 cache.go:115] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 05:15:56.419170   13476 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.2" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2" took 80.5µs
	I0701 05:15:56.419174   13476 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 84.167µs
	I0701 05:15:56.419232   13476 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 05:15:56.419185   13476 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.2 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 succeeded
	I0701 05:15:56.419198   13476 cache.go:115] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0701 05:15:56.419242   13476 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 90.875µs
	I0701 05:15:56.419246   13476 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0701 05:15:56.419203   13476 cache.go:107] acquiring lock: {Name:mk72ec3a83ced91701e46386f69e5cf5616cb14d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:56.419257   13476 cache.go:107] acquiring lock: {Name:mk0617bb20099dc0bee05a1bc72513009715a467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:56.419097   13476 cache.go:107] acquiring lock: {Name:mkd7e395ca1bc1051a1b019df04a499af2a7e8d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:56.419219   13476 cache.go:115] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0701 05:15:56.419292   13476 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 171.959µs
	I0701 05:15:56.419215   13476 cache.go:107] acquiring lock: {Name:mk1e0e69497cf4a3e35b4f6d9e800ee2f5704dd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:15:56.419276   13476 cache.go:115] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0701 05:15:56.419311   13476 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 107.291µs
	I0701 05:15:56.419314   13476 cache.go:115] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 exists
	I0701 05:15:56.419328   13476 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.2" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2" took 93.167µs
	I0701 05:15:56.419333   13476 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.2 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 succeeded
	I0701 05:15:56.419314   13476 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0701 05:15:56.419296   13476 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0701 05:15:56.419314   13476 cache.go:115] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 exists
	I0701 05:15:56.419345   13476 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.2" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2" took 256.875µs
	I0701 05:15:56.419348   13476 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.2 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 succeeded
	I0701 05:15:56.419351   13476 cache.go:115] /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 exists
	I0701 05:15:56.419357   13476 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.2" -> "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2" took 141.667µs
	I0701 05:15:56.419361   13476 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.2 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 succeeded
	I0701 05:15:56.419366   13476 cache.go:87] Successfully saved all images to host disk.
	I0701 05:15:56.419509   13476 start.go:360] acquireMachinesLock for no-preload-340000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:15:56.419544   13476 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "no-preload-340000"
	I0701 05:15:56.419554   13476 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:15:56.419558   13476 fix.go:54] fixHost starting: 
	I0701 05:15:56.419676   13476 fix.go:112] recreateIfNeeded on no-preload-340000: state=Stopped err=<nil>
	W0701 05:15:56.419685   13476 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:15:56.426911   13476 out.go:177] * Restarting existing qemu2 VM for "no-preload-340000" ...
	I0701 05:15:56.431004   13476 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:4b:a2:81:f7:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2
	I0701 05:15:56.432975   13476 main.go:141] libmachine: STDOUT: 
	I0701 05:15:56.432996   13476 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:15:56.433023   13476 fix.go:56] duration metric: took 13.463666ms for fixHost
	I0701 05:15:56.433027   13476 start.go:83] releasing machines lock for "no-preload-340000", held for 13.478291ms
	W0701 05:15:56.433034   13476 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:15:56.433060   13476 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:15:56.433066   13476 start.go:728] Will try again in 5 seconds ...
	I0701 05:16:01.435274   13476 start.go:360] acquireMachinesLock for no-preload-340000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:01.762289   13476 start.go:364] duration metric: took 326.883833ms to acquireMachinesLock for "no-preload-340000"
	I0701 05:16:01.762457   13476 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:16:01.762480   13476 fix.go:54] fixHost starting: 
	I0701 05:16:01.763274   13476 fix.go:112] recreateIfNeeded on no-preload-340000: state=Stopped err=<nil>
	W0701 05:16:01.763301   13476 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:16:01.776074   13476 out.go:177] * Restarting existing qemu2 VM for "no-preload-340000" ...
	I0701 05:16:01.780235   13476 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:4b:a2:81:f7:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/no-preload-340000/disk.qcow2
	I0701 05:16:01.789884   13476 main.go:141] libmachine: STDOUT: 
	I0701 05:16:01.789955   13476 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:01.790038   13476 fix.go:56] duration metric: took 27.561166ms for fixHost
	I0701 05:16:01.790053   13476 start.go:83] releasing machines lock for "no-preload-340000", held for 27.725875ms
	W0701 05:16:01.790214   13476 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-340000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-340000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:01.802039   13476 out.go:177] 
	W0701 05:16:01.806203   13476 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:16:01.806234   13476 out.go:239] * 
	* 
	W0701 05:16:01.808447   13476 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:16:01.819006   13476 out.go:177] 

** /stderr **
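With --preload=false, the start path above skips the preload tarball and instead checks each component image tarball in the local cache (the cache.go "exists ... succeeded" lines), so no downloads were needed; the failure comes only afterwards, at the VM restart. A sketch that performs the same existence check over the cached image files named in the log (the cache root is this CI host's; adjust for another machine):

    // cache_check.go: stat the per-image tarballs the log reports as cached.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        base := "/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/images/arm64"
        images := []string{
            "registry.k8s.io/kube-apiserver_v1.30.2",
            "registry.k8s.io/kube-proxy_v1.30.2",
            "registry.k8s.io/etcd_3.5.12-0",
            "registry.k8s.io/pause_3.9",
            "gcr.io/k8s-minikube/storage-provisioner_v5",
        }
        for _, img := range images {
            if _, err := os.Stat(filepath.Join(base, img)); err != nil {
                fmt.Println("missing from cache:", img)
                continue
            }
            fmt.Println("exists:", img) // mirrors the cache.go "exists" lines
        }
    }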
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-340000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (55.579584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-340000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.57s)
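The post-mortem helper tolerates a non-zero exit from minikube status because the command encodes host state in its exit code (7 here, with "Stopped" on stdout) rather than signalling only hard errors; that is what the "(may be ok)" annotation means. A sketch of the same tolerant check, assuming the binary path and profile name from this run:

    // status_check.go: run minikube status and report, rather than abort on,
    // a state-encoding non-zero exit.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", "no-preload-340000")
        out, err := cmd.Output() // stdout is captured even on non-zero exit
        if ee, ok := err.(*exec.ExitError); ok {
            fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
        } else if err != nil {
            fmt.Println("could not run status:", err) // e.g. binary not built
            return
        }
        fmt.Printf("host state: %s\n", out) // "Stopped" for this profile
    }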

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-416000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-416000 create -f testdata/busybox.yaml: exit status 1 (32.089917ms)

** stderr ** 
	error: context "embed-certs-416000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-416000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (29.956625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-416000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (33.079583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
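From here on, every kubectl step fails identically: because no start ever completed, no context was written to the kubeconfig, so any "kubectl --context <profile> ..." exits 1 with "context ... does not exist". A sketch that checks for the context up front, using the real "kubectl config get-contexts" subcommand (profile name taken from this test):

    // context_check.go: verify a kubeconfig context exists before using it.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const ctx = "embed-certs-416000"
        // "-o name" prints one context name per line.
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            fmt.Println("could not list contexts:", err)
            return
        }
        for _, name := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if name == ctx {
                fmt.Println("context exists:", ctx)
                return
            }
        }
        fmt.Printf("context %q does not exist; the cluster never started\n", ctx)
    }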

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-340000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (34.180708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-340000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-340000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-340000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-340000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.5825ms)

** stderr ** 
	error: context "no-preload-340000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-340000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (31.316459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-340000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-416000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-416000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-416000 describe deploy/metrics-server -n kube-system: exit status 1 (29.251541ms)

** stderr ** 
	error: context "embed-certs-416000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-416000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (31.986583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-340000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (33.3035ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-340000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-340000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-340000 --alsologtostderr -v=1: exit status 83 (41.507292ms)

-- stdout --
	* The control-plane node no-preload-340000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-340000"

-- /stdout --
** stderr ** 
	I0701 05:16:02.094593   13512 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:16:02.094739   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:02.094742   13512 out.go:304] Setting ErrFile to fd 2...
	I0701 05:16:02.094744   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:02.094878   13512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:16:02.095111   13512 out.go:298] Setting JSON to false
	I0701 05:16:02.095121   13512 mustload.go:65] Loading cluster: no-preload-340000
	I0701 05:16:02.095323   13512 config.go:182] Loaded profile config "no-preload-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:02.098501   13512 out.go:177] * The control-plane node no-preload-340000 host is not running: state=Stopped
	I0701 05:16:02.102537   13512 out.go:177]   To start a cluster, run: "minikube start -p no-preload-340000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-340000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (30.816125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-340000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (28.403916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-340000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-318000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-318000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.751274833s)

-- stdout --
	* [default-k8s-diff-port-318000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-318000" primary control-plane node in "default-k8s-diff-port-318000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:16:02.500151   13543 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:16:02.500298   13543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:02.500301   13543 out.go:304] Setting ErrFile to fd 2...
	I0701 05:16:02.500303   13543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:02.500429   13543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:16:02.501498   13543 out.go:298] Setting JSON to false
	I0701 05:16:02.517663   13543 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8131,"bootTime":1719828031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:16:02.517757   13543 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:16:02.522571   13543 out.go:177] * [default-k8s-diff-port-318000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:16:02.529519   13543 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:16:02.529553   13543 notify.go:220] Checking for updates...
	I0701 05:16:02.535481   13543 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:16:02.538514   13543 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:16:02.541526   13543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:16:02.544509   13543 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:16:02.547492   13543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:16:02.550817   13543 config.go:182] Loaded profile config "embed-certs-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:02.550875   13543 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:02.550926   13543 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:16:02.555389   13543 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:16:02.562488   13543 start.go:297] selected driver: qemu2
	I0701 05:16:02.562495   13543 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:16:02.562503   13543 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:16:02.564706   13543 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 05:16:02.567440   13543 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:16:02.570587   13543 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:16:02.570612   13543 cni.go:84] Creating CNI manager for ""
	I0701 05:16:02.570622   13543 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:16:02.570638   13543 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:16:02.570669   13543 start.go:340] cluster config:
	{Name:default-k8s-diff-port-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-318000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:16:02.574244   13543 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:16:02.581468   13543 out.go:177] * Starting "default-k8s-diff-port-318000" primary control-plane node in "default-k8s-diff-port-318000" cluster
	I0701 05:16:02.585491   13543 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:16:02.585508   13543 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:16:02.585519   13543 cache.go:56] Caching tarball of preloaded images
	I0701 05:16:02.585583   13543 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:16:02.585589   13543 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:16:02.585650   13543 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/default-k8s-diff-port-318000/config.json ...
	I0701 05:16:02.585662   13543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/default-k8s-diff-port-318000/config.json: {Name:mk85cd72d2b33ce3dc2188eea839773004da4aa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:16:02.586024   13543 start.go:360] acquireMachinesLock for default-k8s-diff-port-318000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:02.586060   13543 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "default-k8s-diff-port-318000"
	I0701 05:16:02.586073   13543 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-318000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:16:02.586102   13543 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:16:02.590516   13543 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:16:02.607700   13543 start.go:159] libmachine.API.Create for "default-k8s-diff-port-318000" (driver="qemu2")
	I0701 05:16:02.607725   13543 client.go:168] LocalClient.Create starting
	I0701 05:16:02.607793   13543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:16:02.607823   13543 main.go:141] libmachine: Decoding PEM data...
	I0701 05:16:02.607831   13543 main.go:141] libmachine: Parsing certificate...
	I0701 05:16:02.607871   13543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:16:02.607894   13543 main.go:141] libmachine: Decoding PEM data...
	I0701 05:16:02.607900   13543 main.go:141] libmachine: Parsing certificate...
	I0701 05:16:02.608363   13543 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:16:02.737223   13543 main.go:141] libmachine: Creating SSH key...
	I0701 05:16:02.806812   13543 main.go:141] libmachine: Creating Disk image...
	I0701 05:16:02.806817   13543 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:16:02.806972   13543 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2
	I0701 05:16:02.816104   13543 main.go:141] libmachine: STDOUT: 
	I0701 05:16:02.816122   13543 main.go:141] libmachine: STDERR: 
	I0701 05:16:02.816172   13543 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2 +20000M
	I0701 05:16:02.824227   13543 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:16:02.824242   13543 main.go:141] libmachine: STDERR: 
	I0701 05:16:02.824259   13543 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2
	I0701 05:16:02.824263   13543 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:16:02.824288   13543 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:41:a5:59:f6:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2
	I0701 05:16:02.825878   13543 main.go:141] libmachine: STDOUT: 
	I0701 05:16:02.825893   13543 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:02.825918   13543 client.go:171] duration metric: took 218.187875ms to LocalClient.Create
	I0701 05:16:04.828116   13543 start.go:128] duration metric: took 2.241983084s to createHost
	I0701 05:16:04.828202   13543 start.go:83] releasing machines lock for "default-k8s-diff-port-318000", held for 2.242118166s
	W0701 05:16:04.828281   13543 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:04.840547   13543 out.go:177] * Deleting "default-k8s-diff-port-318000" in qemu2 ...
	W0701 05:16:04.863678   13543 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:04.863709   13543 start.go:728] Will try again in 5 seconds ...
	I0701 05:16:09.865896   13543 start.go:360] acquireMachinesLock for default-k8s-diff-port-318000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:09.866355   13543 start.go:364] duration metric: took 379.125µs to acquireMachinesLock for "default-k8s-diff-port-318000"
	I0701 05:16:09.866482   13543 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-318000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:16:09.866743   13543 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:16:09.872485   13543 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:16:09.923097   13543 start.go:159] libmachine.API.Create for "default-k8s-diff-port-318000" (driver="qemu2")
	I0701 05:16:09.923171   13543 client.go:168] LocalClient.Create starting
	I0701 05:16:09.923283   13543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:16:09.923347   13543 main.go:141] libmachine: Decoding PEM data...
	I0701 05:16:09.923361   13543 main.go:141] libmachine: Parsing certificate...
	I0701 05:16:09.923420   13543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:16:09.923463   13543 main.go:141] libmachine: Decoding PEM data...
	I0701 05:16:09.923473   13543 main.go:141] libmachine: Parsing certificate...
	I0701 05:16:09.924023   13543 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:16:10.065759   13543 main.go:141] libmachine: Creating SSH key...
	I0701 05:16:10.151689   13543 main.go:141] libmachine: Creating Disk image...
	I0701 05:16:10.151694   13543 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:16:10.151866   13543 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2
	I0701 05:16:10.161047   13543 main.go:141] libmachine: STDOUT: 
	I0701 05:16:10.161074   13543 main.go:141] libmachine: STDERR: 
	I0701 05:16:10.161131   13543 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2 +20000M
	I0701 05:16:10.168979   13543 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:16:10.168998   13543 main.go:141] libmachine: STDERR: 
	I0701 05:16:10.169021   13543 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2
	I0701 05:16:10.169026   13543 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:16:10.169063   13543 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:dd:6c:49:0b:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2
	I0701 05:16:10.170688   13543 main.go:141] libmachine: STDOUT: 
	I0701 05:16:10.170709   13543 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:10.170724   13543 client.go:171] duration metric: took 247.544916ms to LocalClient.Create
	I0701 05:16:12.172910   13543 start.go:128] duration metric: took 2.306110625s to createHost
	I0701 05:16:12.172960   13543 start.go:83] releasing machines lock for "default-k8s-diff-port-318000", held for 2.30657075s
	W0701 05:16:12.173323   13543 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:12.187963   13543 out.go:177] 
	W0701 05:16:12.192014   13543 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:16:12.192067   13543 out.go:239] * 
	* 
	W0701 05:16:12.194804   13543 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:16:12.207023   13543 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-318000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (61.268834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-318000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.81s)
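Note on the recurring failure mode: every start attempt in this group aborts with the same driver error, Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning the qemu2 driver could not reach the socket_vmnet daemon on the agent. A minimal host-side check, assuming socket_vmnet was installed via Homebrew as the log paths suggest (a hypothetical diagnostic session, not output from this run):

	# does the daemon's unix socket exist?
	ls -l /var/run/socket_vmnet
	# is the root LaunchDaemon loaded?
	sudo launchctl list | grep socket_vmnet
	# if not, start it (Homebrew registers socket_vmnet as a service)
	sudo brew services start socket_vmnet

With the service running, the qemu-system-aarch64 invocations logged above would receive a connected fd 3 from socket_vmnet_client instead of the connection-refused error.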
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-416000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-416000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (6.660900625s)

-- stdout --
	* [embed-certs-416000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-416000" primary control-plane node in "embed-certs-416000" cluster
	* Restarting existing qemu2 VM for "embed-certs-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:16:05.609272   13571 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:16:05.609383   13571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:05.609386   13571 out.go:304] Setting ErrFile to fd 2...
	I0701 05:16:05.609389   13571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:05.609537   13571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:16:05.610497   13571 out.go:298] Setting JSON to false
	I0701 05:16:05.626506   13571 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8134,"bootTime":1719828031,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:16:05.626573   13571 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:16:05.631370   13571 out.go:177] * [embed-certs-416000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:16:05.638327   13571 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:16:05.638401   13571 notify.go:220] Checking for updates...
	I0701 05:16:05.645370   13571 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:16:05.648296   13571 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:16:05.651515   13571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:16:05.654387   13571 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:16:05.657268   13571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:16:05.660621   13571 config.go:182] Loaded profile config "embed-certs-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:05.660882   13571 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:16:05.664265   13571 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:16:05.671279   13571 start.go:297] selected driver: qemu2
	I0701 05:16:05.671284   13571 start.go:901] validating driver "qemu2" against &{Name:embed-certs-416000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:16:05.671334   13571 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:16:05.673758   13571 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:16:05.673794   13571 cni.go:84] Creating CNI manager for ""
	I0701 05:16:05.673803   13571 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:16:05.673836   13571 start.go:340] cluster config:
	{Name:embed-certs-416000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:16:05.677601   13571 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:16:05.685331   13571 out.go:177] * Starting "embed-certs-416000" primary control-plane node in "embed-certs-416000" cluster
	I0701 05:16:05.689366   13571 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:16:05.689385   13571 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:16:05.689393   13571 cache.go:56] Caching tarball of preloaded images
	I0701 05:16:05.689454   13571 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:16:05.689461   13571 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:16:05.689520   13571 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/embed-certs-416000/config.json ...
	I0701 05:16:05.689953   13571 start.go:360] acquireMachinesLock for embed-certs-416000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:05.689987   13571 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "embed-certs-416000"
	I0701 05:16:05.689997   13571 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:16:05.690002   13571 fix.go:54] fixHost starting: 
	I0701 05:16:05.690120   13571 fix.go:112] recreateIfNeeded on embed-certs-416000: state=Stopped err=<nil>
	W0701 05:16:05.690128   13571 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:16:05.698319   13571 out.go:177] * Restarting existing qemu2 VM for "embed-certs-416000" ...
	I0701 05:16:05.702382   13571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:4c:88:5a:16:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2
	I0701 05:16:05.704296   13571 main.go:141] libmachine: STDOUT: 
	I0701 05:16:05.704314   13571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:05.704343   13571 fix.go:56] duration metric: took 14.340708ms for fixHost
	I0701 05:16:05.704347   13571 start.go:83] releasing machines lock for "embed-certs-416000", held for 14.355625ms
	W0701 05:16:05.704353   13571 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:16:05.704394   13571 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:05.704399   13571 start.go:728] Will try again in 5 seconds ...
	I0701 05:16:10.706663   13571 start.go:360] acquireMachinesLock for embed-certs-416000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:12.173131   13571 start.go:364] duration metric: took 1.466314s to acquireMachinesLock for "embed-certs-416000"
	I0701 05:16:12.173322   13571 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:16:12.173345   13571 fix.go:54] fixHost starting: 
	I0701 05:16:12.174106   13571 fix.go:112] recreateIfNeeded on embed-certs-416000: state=Stopped err=<nil>
	W0701 05:16:12.174132   13571 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:16:12.189340   13571 out.go:177] * Restarting existing qemu2 VM for "embed-certs-416000" ...
	I0701 05:16:12.195198   13571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:4c:88:5a:16:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/embed-certs-416000/disk.qcow2
	I0701 05:16:12.204148   13571 main.go:141] libmachine: STDOUT: 
	I0701 05:16:12.204209   13571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:12.204282   13571 fix.go:56] duration metric: took 30.936792ms for fixHost
	I0701 05:16:12.204300   13571 start.go:83] releasing machines lock for "embed-certs-416000", held for 31.103292ms
	W0701 05:16:12.204476   13571 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:12.217868   13571 out.go:177] 
	W0701 05:16:12.222018   13571 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:16:12.222076   13571 out.go:239] * 
	* 
	W0701 05:16:12.225186   13571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:16:12.233104   13571 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-416000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (50.941792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.71s)
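
Every FAIL above and below reduces to the same driver error: qemu2 cannot reach the socket_vmnet unix socket. A quick check of whether the daemon is alive on the agent, using only the paths that appear in this log (a diagnostic sketch, not output from this run):

	# Does the socket exist, and is a socket_vmnet daemon serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet || echo "socket_vmnet is not running"
	# Exercise the same client binary minikube shells out to; with no daemon
	# this fails with the "Connection refused" seen throughout this report:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true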

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-318000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-318000 create -f testdata/busybox.yaml: exit status 1 (31.057834ms)

** stderr ** 
	error: context "default-k8s-diff-port-318000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-318000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (29.074584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-318000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (35.030167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-318000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
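
The kubectl failures here are a downstream symptom: the earlier start never created the VM, so minikube never wrote a "default-k8s-diff-port-318000" context into the kubeconfig, and every kubectl --context invocation fails before reaching any cluster. This can be confirmed directly (a sketch; the KUBECONFIG path is the one printed in the start output above):

	# The profile name will be absent from the registered contexts.
	KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig kubectl config get-contexts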

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-416000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (33.102292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-416000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-416000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-416000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.907167ms)

** stderr ** 
	error: context "embed-certs-416000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-416000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (30.743458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-318000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-318000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-318000 describe deploy/metrics-server -n kube-system: exit status 1 (28.555875ms)

** stderr ** 
	error: context "default-k8s-diff-port-318000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-318000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (34.751834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-318000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
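
Note that the "addons enable" command itself is not what failed here; only the kubectl verification afterwards did, for the same missing-context reason. On a healthy cluster, the property the test asserts can be read directly off the deployment (a sketch assuming a running profile; the jsonpath query is illustrative, not taken from the test):

	kubectl --context default-k8s-diff-port-318000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Per the --images/--registries overrides above, this should print
	# fake.domain/registry.k8s.io/echoserver:1.4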

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-416000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (30.956625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
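
The "(-want +got)" diff above lists every expected image as missing because "image list" against a stopped VM returns an empty set, not because individual images failed to load. With a running profile, the same command enumerates the control-plane images (a sketch; the jq filter is illustrative and assumes jq is installed):

	out/minikube-darwin-arm64 -p embed-certs-416000 image list --format=json | jq -r '.[].repoTags[]'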

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-416000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-416000 --alsologtostderr -v=1: exit status 83 (49.348416ms)

-- stdout --
	* The control-plane node embed-certs-416000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-416000"

-- /stdout --
** stderr ** 
	I0701 05:16:12.500570   13604 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:16:12.500729   13604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:12.500732   13604 out.go:304] Setting ErrFile to fd 2...
	I0701 05:16:12.500734   13604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:12.500875   13604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:16:12.501099   13604 out.go:298] Setting JSON to false
	I0701 05:16:12.501107   13604 mustload.go:65] Loading cluster: embed-certs-416000
	I0701 05:16:12.501327   13604 config.go:182] Loaded profile config "embed-certs-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:12.505065   13604 out.go:177] * The control-plane node embed-certs-416000 host is not running: state=Stopped
	I0701 05:16:12.512853   13604 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-416000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-416000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (30.843875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-416000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (28.305833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-050000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-050000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.747648125s)

-- stdout --
	* [newest-cni-050000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-050000" primary control-plane node in "newest-cni-050000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-050000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:16:12.808493   13627 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:16:12.808630   13627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:12.808633   13627 out.go:304] Setting ErrFile to fd 2...
	I0701 05:16:12.808636   13627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:12.808769   13627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:16:12.809873   13627 out.go:298] Setting JSON to false
	I0701 05:16:12.825881   13627 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8141,"bootTime":1719828031,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:16:12.825949   13627 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:16:12.831012   13627 out.go:177] * [newest-cni-050000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:16:12.838009   13627 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:16:12.838061   13627 notify.go:220] Checking for updates...
	I0701 05:16:12.844933   13627 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:16:12.847961   13627 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:16:12.850972   13627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:16:12.853939   13627 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:16:12.856925   13627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:16:12.860248   13627 config.go:182] Loaded profile config "default-k8s-diff-port-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:12.860306   13627 config.go:182] Loaded profile config "multinode-037000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:12.860369   13627 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:16:12.864935   13627 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 05:16:12.871961   13627 start.go:297] selected driver: qemu2
	I0701 05:16:12.871969   13627 start.go:901] validating driver "qemu2" against <nil>
	I0701 05:16:12.871977   13627 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:16:12.874167   13627 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0701 05:16:12.874191   13627 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0701 05:16:12.881920   13627 out.go:177] * Automatically selected the socket_vmnet network
	I0701 05:16:12.884996   13627 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0701 05:16:12.885031   13627 cni.go:84] Creating CNI manager for ""
	I0701 05:16:12.885040   13627 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:16:12.885044   13627 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 05:16:12.885067   13627 start.go:340] cluster config:
	{Name:newest-cni-050000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-050000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:16:12.888742   13627 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:16:12.894964   13627 out.go:177] * Starting "newest-cni-050000" primary control-plane node in "newest-cni-050000" cluster
	I0701 05:16:12.899016   13627 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:16:12.899035   13627 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:16:12.899045   13627 cache.go:56] Caching tarball of preloaded images
	I0701 05:16:12.899115   13627 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:16:12.899127   13627 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:16:12.899199   13627 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/newest-cni-050000/config.json ...
	I0701 05:16:12.899210   13627 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/newest-cni-050000/config.json: {Name:mk0e9ab6c2989ec0b29f816cec44470c2a399a21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 05:16:12.899582   13627 start.go:360] acquireMachinesLock for newest-cni-050000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:12.899617   13627 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "newest-cni-050000"
	I0701 05:16:12.899631   13627 start.go:93] Provisioning new machine with config: &{Name:newest-cni-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-050000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:16:12.899709   13627 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:16:12.908889   13627 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:16:12.926929   13627 start.go:159] libmachine.API.Create for "newest-cni-050000" (driver="qemu2")
	I0701 05:16:12.926970   13627 client.go:168] LocalClient.Create starting
	I0701 05:16:12.927026   13627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:16:12.927056   13627 main.go:141] libmachine: Decoding PEM data...
	I0701 05:16:12.927066   13627 main.go:141] libmachine: Parsing certificate...
	I0701 05:16:12.927108   13627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:16:12.927136   13627 main.go:141] libmachine: Decoding PEM data...
	I0701 05:16:12.927144   13627 main.go:141] libmachine: Parsing certificate...
	I0701 05:16:12.927618   13627 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:16:13.057261   13627 main.go:141] libmachine: Creating SSH key...
	I0701 05:16:13.090900   13627 main.go:141] libmachine: Creating Disk image...
	I0701 05:16:13.090906   13627 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:16:13.091077   13627 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2
	I0701 05:16:13.100202   13627 main.go:141] libmachine: STDOUT: 
	I0701 05:16:13.100221   13627 main.go:141] libmachine: STDERR: 
	I0701 05:16:13.100268   13627 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2 +20000M
	I0701 05:16:13.107976   13627 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:16:13.107992   13627 main.go:141] libmachine: STDERR: 
	I0701 05:16:13.108003   13627 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2
	I0701 05:16:13.108007   13627 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:16:13.108037   13627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:cf:ca:ad:2d:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2
	I0701 05:16:13.109642   13627 main.go:141] libmachine: STDOUT: 
	I0701 05:16:13.109658   13627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:13.109675   13627 client.go:171] duration metric: took 182.700792ms to LocalClient.Create
	I0701 05:16:15.111871   13627 start.go:128] duration metric: took 2.212128541s to createHost
	I0701 05:16:15.112013   13627 start.go:83] releasing machines lock for "newest-cni-050000", held for 2.212313666s
	W0701 05:16:15.112057   13627 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:15.126354   13627 out.go:177] * Deleting "newest-cni-050000" in qemu2 ...
	W0701 05:16:15.152786   13627 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:15.152842   13627 start.go:728] Will try again in 5 seconds ...
	I0701 05:16:20.155119   13627 start.go:360] acquireMachinesLock for newest-cni-050000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:20.155646   13627 start.go:364] duration metric: took 388.167µs to acquireMachinesLock for "newest-cni-050000"
	I0701 05:16:20.155776   13627 start.go:93] Provisioning new machine with config: &{Name:newest-cni-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-050000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 05:16:20.156083   13627 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 05:16:20.164836   13627 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 05:16:20.214222   13627 start.go:159] libmachine.API.Create for "newest-cni-050000" (driver="qemu2")
	I0701 05:16:20.214280   13627 client.go:168] LocalClient.Create starting
	I0701 05:16:20.214387   13627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/ca.pem
	I0701 05:16:20.214455   13627 main.go:141] libmachine: Decoding PEM data...
	I0701 05:16:20.214469   13627 main.go:141] libmachine: Parsing certificate...
	I0701 05:16:20.214537   13627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19166-9507/.minikube/certs/cert.pem
	I0701 05:16:20.214581   13627 main.go:141] libmachine: Decoding PEM data...
	I0701 05:16:20.214595   13627 main.go:141] libmachine: Parsing certificate...
	I0701 05:16:20.215082   13627 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso...
	I0701 05:16:20.375085   13627 main.go:141] libmachine: Creating SSH key...
	I0701 05:16:20.454686   13627 main.go:141] libmachine: Creating Disk image...
	I0701 05:16:20.454695   13627 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 05:16:20.454897   13627 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2.raw /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2
	I0701 05:16:20.464211   13627 main.go:141] libmachine: STDOUT: 
	I0701 05:16:20.464229   13627 main.go:141] libmachine: STDERR: 
	I0701 05:16:20.464276   13627 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2 +20000M
	I0701 05:16:20.472095   13627 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 05:16:20.472109   13627 main.go:141] libmachine: STDERR: 
	I0701 05:16:20.472127   13627 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2
	I0701 05:16:20.472134   13627 main.go:141] libmachine: Starting QEMU VM...
	I0701 05:16:20.472172   13627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9c:aa:ab:58:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2
	I0701 05:16:20.473706   13627 main.go:141] libmachine: STDOUT: 
	I0701 05:16:20.473728   13627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:20.473741   13627 client.go:171] duration metric: took 259.450625ms to LocalClient.Create
	I0701 05:16:22.475914   13627 start.go:128] duration metric: took 2.319793875s to createHost
	I0701 05:16:22.476028   13627 start.go:83] releasing machines lock for "newest-cni-050000", held for 2.320345834s
	W0701 05:16:22.476316   13627 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-050000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-050000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:22.491070   13627 out.go:177] 
	W0701 05:16:22.494089   13627 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:16:22.494134   13627 out.go:239] * 
	* 
	W0701 05:16:22.496891   13627 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:16:22.506049   13627 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-050000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000: exit status 7 (65.294625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-050000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.82s)
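
Both creation attempts fail at the same point: the socket_vmnet_client wrapper in the qemu-system-aarch64 command line cannot connect, so QEMU is never launched. One plausible recovery on the agent, assuming socket_vmnet was installed from source under /opt/socket_vmnet as the SocketVMnetClientPath in the cluster config suggests (a sketch, not commands from this run; the gateway address is socket_vmnet's documented default):

	# Restart the daemon (vmnet requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	# Clear the half-created profile, then retry the start:
	out/minikube-darwin-arm64 delete -p newest-cni-050000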

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-318000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-318000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.87821075s)

-- stdout --
	* [default-k8s-diff-port-318000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-318000" primary control-plane node in "default-k8s-diff-port-318000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-318000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-318000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:16:16.691439   13657 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:16:16.691572   13657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:16.691575   13657 out.go:304] Setting ErrFile to fd 2...
	I0701 05:16:16.691577   13657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:16.691707   13657 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:16:16.692722   13657 out.go:298] Setting JSON to false
	I0701 05:16:16.708796   13657 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8145,"bootTime":1719828031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:16:16.708861   13657 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:16:16.713672   13657 out.go:177] * [default-k8s-diff-port-318000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:16:16.720563   13657 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:16:16.720599   13657 notify.go:220] Checking for updates...
	I0701 05:16:16.727672   13657 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:16:16.729108   13657 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:16:16.732642   13657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:16:16.735669   13657 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:16:16.738680   13657 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:16:16.741868   13657 config.go:182] Loaded profile config "default-k8s-diff-port-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:16.742129   13657 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:16:16.746693   13657 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:16:16.753647   13657 start.go:297] selected driver: qemu2
	I0701 05:16:16.753666   13657 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-318000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:16:16.753719   13657 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:16:16.755885   13657 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 05:16:16.755919   13657 cni.go:84] Creating CNI manager for ""
	I0701 05:16:16.755926   13657 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:16:16.755959   13657 start.go:340] cluster config:
	{Name:default-k8s-diff-port-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-318000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:16:16.759496   13657 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:16:16.765649   13657 out.go:177] * Starting "default-k8s-diff-port-318000" primary control-plane node in "default-k8s-diff-port-318000" cluster
	I0701 05:16:16.769619   13657 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:16:16.769637   13657 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:16:16.769652   13657 cache.go:56] Caching tarball of preloaded images
	I0701 05:16:16.769708   13657 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:16:16.769713   13657 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:16:16.769766   13657 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/default-k8s-diff-port-318000/config.json ...
	I0701 05:16:16.770196   13657 start.go:360] acquireMachinesLock for default-k8s-diff-port-318000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:16.770231   13657 start.go:364] duration metric: took 29µs to acquireMachinesLock for "default-k8s-diff-port-318000"
	I0701 05:16:16.770241   13657 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:16:16.770246   13657 fix.go:54] fixHost starting: 
	I0701 05:16:16.770372   13657 fix.go:112] recreateIfNeeded on default-k8s-diff-port-318000: state=Stopped err=<nil>
	W0701 05:16:16.770380   13657 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:16:16.773547   13657 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-318000" ...
	I0701 05:16:16.781631   13657 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:dd:6c:49:0b:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2
	I0701 05:16:16.783678   13657 main.go:141] libmachine: STDOUT: 
	I0701 05:16:16.783697   13657 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:16.783727   13657 fix.go:56] duration metric: took 13.481ms for fixHost
	I0701 05:16:16.783743   13657 start.go:83] releasing machines lock for "default-k8s-diff-port-318000", held for 13.4965ms
	W0701 05:16:16.783753   13657 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:16:16.783787   13657 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:16.783792   13657 start.go:728] Will try again in 5 seconds ...
	I0701 05:16:21.786038   13657 start.go:360] acquireMachinesLock for default-k8s-diff-port-318000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:22.476190   13657 start.go:364] duration metric: took 690.022791ms to acquireMachinesLock for "default-k8s-diff-port-318000"
	I0701 05:16:22.476354   13657 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:16:22.476373   13657 fix.go:54] fixHost starting: 
	I0701 05:16:22.477143   13657 fix.go:112] recreateIfNeeded on default-k8s-diff-port-318000: state=Stopped err=<nil>
	W0701 05:16:22.477170   13657 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:16:22.491031   13657 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-318000" ...
	I0701 05:16:22.494324   13657 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:dd:6c:49:0b:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/default-k8s-diff-port-318000/disk.qcow2
	I0701 05:16:22.503876   13657 main.go:141] libmachine: STDOUT: 
	I0701 05:16:22.503937   13657 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:22.504004   13657 fix.go:56] duration metric: took 27.631875ms for fixHost
	I0701 05:16:22.504019   13657 start.go:83] releasing machines lock for "default-k8s-diff-port-318000", held for 27.793375ms
	W0701 05:16:22.504211   13657 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-318000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-318000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:22.517953   13657 out.go:177] 
	W0701 05:16:22.522155   13657 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:16:22.522197   13657 out.go:239] * 
	* 
	W0701 05:16:22.524988   13657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:16:22.534998   13657 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-318000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (56.686417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-318000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.94s)
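
Every qemu2 start in this group dies at the same point: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so the VM is never launched. That points at the socket_vmnet daemon on the build host rather than at minikube itself. A minimal diagnostic sketch, assuming the install layout shown in the log above (the daemon binary path is inferred from the client path; the gateway address is illustrative, not taken from this run):

	# Does the socket exist, and is a daemon process serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, restarting the daemon may recover subsequent runs:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet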

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-318000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (38.502708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-318000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
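
The `context "default-k8s-diff-port-318000" does not exist` failures here and below are a knock-on effect of SecondStart: since the VM never came up, minikube never wrote a kubeconfig context for the profile, so every kubectl call against it fails immediately. A quick way to confirm, using only standard kubectl:

	kubectl config get-contexts -o name | grep default-k8s-diff-port-318000 \
	  || echo "no kubeconfig context for this profile"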

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-318000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-318000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-318000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.158ms)

** stderr ** 
	error: context "default-k8s-diff-port-318000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-318000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (33.929459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-318000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-318000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (29.382959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-318000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)
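
All eight v1.30.2 images are reported missing for the same underlying reason: the VM never started, so nothing was ever loaded into it. To reproduce the check by hand, a sketch assuming `image list --format=json` emits an array of objects with a repoTags field, and that jq is installed:

	out/minikube-darwin-arm64 -p default-k8s-diff-port-318000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort
	# On a healthy cluster this should include registry.k8s.io/kube-apiserver:v1.30.2,
	# registry.k8s.io/etcd:3.5.12-0, registry.k8s.io/pause:3.9, and the rest of the
	# want-list the test diffs against above.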

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-318000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-318000 --alsologtostderr -v=1: exit status 83 (41.230833ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-318000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-318000"

-- /stdout --
** stderr ** 
	I0701 05:16:22.793829   13693 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:16:22.793992   13693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:22.793995   13693 out.go:304] Setting ErrFile to fd 2...
	I0701 05:16:22.793998   13693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:22.794150   13693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:16:22.794361   13693 out.go:298] Setting JSON to false
	I0701 05:16:22.794369   13693 mustload.go:65] Loading cluster: default-k8s-diff-port-318000
	I0701 05:16:22.794544   13693 config.go:182] Loaded profile config "default-k8s-diff-port-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:22.799234   13693 out.go:177] * The control-plane node default-k8s-diff-port-318000 host is not running: state=Stopped
	I0701 05:16:22.803146   13693 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-318000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-318000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (28.936584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-318000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (27.915291ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-318000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
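
The pause failure is a state check, not a crash: judging from the stdout hint above, exit status 83 is minikube declining to pause a profile whose host is stopped. A CI wrapper could gate the pause on host state; a sketch reusing the same status invocation the harness already runs:

	if out/minikube-darwin-arm64 status --format='{{.Host}}' -p default-k8s-diff-port-318000 | grep -q Running; then
	  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-318000
	else
	  echo "host not running; skipping pause" >&2
	fi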

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-050000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-050000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.181572042s)

-- stdout --
	* [newest-cni-050000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-050000" primary control-plane node in "newest-cni-050000" cluster
	* Restarting existing qemu2 VM for "newest-cni-050000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-050000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 05:16:26.317813   13732 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:16:26.317946   13732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:26.317950   13732 out.go:304] Setting ErrFile to fd 2...
	I0701 05:16:26.317952   13732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:26.318096   13732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:16:26.319169   13732 out.go:298] Setting JSON to false
	I0701 05:16:26.335230   13732 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8155,"bootTime":1719828031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 05:16:26.335298   13732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 05:16:26.340034   13732 out.go:177] * [newest-cni-050000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 05:16:26.346863   13732 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 05:16:26.346889   13732 notify.go:220] Checking for updates...
	I0701 05:16:26.354030   13732 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 05:16:26.355397   13732 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 05:16:26.357998   13732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 05:16:26.360977   13732 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 05:16:26.364052   13732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 05:16:26.367326   13732 config.go:182] Loaded profile config "newest-cni-050000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:26.367607   13732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 05:16:26.372015   13732 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 05:16:26.379003   13732 start.go:297] selected driver: qemu2
	I0701 05:16:26.379016   13732 start.go:901] validating driver "qemu2" against &{Name:newest-cni-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-050000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:16:26.379067   13732 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 05:16:26.381278   13732 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0701 05:16:26.381306   13732 cni.go:84] Creating CNI manager for ""
	I0701 05:16:26.381314   13732 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 05:16:26.381335   13732 start.go:340] cluster config:
	{Name:newest-cni-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-050000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 05:16:26.384733   13732 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 05:16:26.390988   13732 out.go:177] * Starting "newest-cni-050000" primary control-plane node in "newest-cni-050000" cluster
	I0701 05:16:26.395033   13732 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 05:16:26.395048   13732 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 05:16:26.395058   13732 cache.go:56] Caching tarball of preloaded images
	I0701 05:16:26.395122   13732 preload.go:173] Found /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 05:16:26.395128   13732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 05:16:26.395197   13732 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/newest-cni-050000/config.json ...
	I0701 05:16:26.395667   13732 start.go:360] acquireMachinesLock for newest-cni-050000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:26.395702   13732 start.go:364] duration metric: took 28.791µs to acquireMachinesLock for "newest-cni-050000"
	I0701 05:16:26.395712   13732 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:16:26.395718   13732 fix.go:54] fixHost starting: 
	I0701 05:16:26.395839   13732 fix.go:112] recreateIfNeeded on newest-cni-050000: state=Stopped err=<nil>
	W0701 05:16:26.395847   13732 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:16:26.399929   13732 out.go:177] * Restarting existing qemu2 VM for "newest-cni-050000" ...
	I0701 05:16:26.410206   13732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9c:aa:ab:58:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2
	I0701 05:16:26.412237   13732 main.go:141] libmachine: STDOUT: 
	I0701 05:16:26.412261   13732 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:26.412288   13732 fix.go:56] duration metric: took 16.569375ms for fixHost
	I0701 05:16:26.412293   13732 start.go:83] releasing machines lock for "newest-cni-050000", held for 16.586875ms
	W0701 05:16:26.412299   13732 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:16:26.412332   13732 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:26.412337   13732 start.go:728] Will try again in 5 seconds ...
	I0701 05:16:31.414603   13732 start.go:360] acquireMachinesLock for newest-cni-050000: {Name:mk9bc6ef90b361c073fa429f411636be4e95fac6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 05:16:31.415142   13732 start.go:364] duration metric: took 418.291µs to acquireMachinesLock for "newest-cni-050000"
	I0701 05:16:31.415289   13732 start.go:96] Skipping create...Using existing machine configuration
	I0701 05:16:31.415312   13732 fix.go:54] fixHost starting: 
	I0701 05:16:31.416028   13732 fix.go:112] recreateIfNeeded on newest-cni-050000: state=Stopped err=<nil>
	W0701 05:16:31.416058   13732 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 05:16:31.425446   13732 out.go:177] * Restarting existing qemu2 VM for "newest-cni-050000" ...
	I0701 05:16:31.430657   13732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9c:aa:ab:58:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19166-9507/.minikube/machines/newest-cni-050000/disk.qcow2
	I0701 05:16:31.440254   13732 main.go:141] libmachine: STDOUT: 
	I0701 05:16:31.440328   13732 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 05:16:31.440448   13732 fix.go:56] duration metric: took 25.136791ms for fixHost
	I0701 05:16:31.440471   13732 start.go:83] releasing machines lock for "newest-cni-050000", held for 25.303125ms
	W0701 05:16:31.440641   13732 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-050000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-050000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 05:16:31.446471   13732 out.go:177] 
	W0701 05:16:31.450517   13732 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 05:16:31.450550   13732 out.go:239] * 
	* 
	W0701 05:16:31.453102   13732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 05:16:31.459483   13732 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-050000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000: exit status 7 (70.279416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-050000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-050000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000: exit status 7 (30.804041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-050000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-050000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-050000 --alsologtostderr -v=1: exit status 83 (42.644208ms)

-- stdout --
	* The control-plane node newest-cni-050000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-050000"

-- /stdout --
** stderr ** 
	I0701 05:16:31.644048   13748 out.go:291] Setting OutFile to fd 1 ...
	I0701 05:16:31.644197   13748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:31.644200   13748 out.go:304] Setting ErrFile to fd 2...
	I0701 05:16:31.644202   13748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 05:16:31.644338   13748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 05:16:31.644564   13748 out.go:298] Setting JSON to false
	I0701 05:16:31.644572   13748 mustload.go:65] Loading cluster: newest-cni-050000
	I0701 05:16:31.644763   13748 config.go:182] Loaded profile config "newest-cni-050000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 05:16:31.649384   13748 out.go:177] * The control-plane node newest-cni-050000 host is not running: state=Stopped
	I0701 05:16:31.653419   13748 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-050000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-050000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000: exit status 7 (30.238584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-050000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000: exit status 7 (29.743916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-050000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.2/json-events 8.28
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.08
18 TestDownloadOnly/v1.30.2/DeleteAll 0.11
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 12.67
39 TestErrorSpam/start 0.37
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 9.74
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.68
55 TestFunctional/serial/CacheCmd/cache/add_local 1.06
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.27
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.09
93 TestFunctional/parallel/License 0.25
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.24
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
126 TestFunctional/parallel/ProfileCmd/profile_list 0.08
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.1
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.85
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.21
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.45
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
257 TestNoKubernetes/serial/ProfileList 31.35
258 TestNoKubernetes/serial/Stop 3.21
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
275 TestStartStop/group/old-k8s-version/serial/Stop 3.09
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
288 TestStartStop/group/no-preload/serial/Stop 3.82
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
297 TestStartStop/group/embed-certs/serial/Stop 3.37
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 4.03
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
313 TestStartStop/group/newest-cni/serial/DeployApp 0
314 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
317 TestStartStop/group/newest-cni/serial/Stop 3.5
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-666000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-666000: exit status 85 (98.608417ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |          |
	|         | -p download-only-666000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 04:49:39
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 04:49:39.546065   10005 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:49:39.546202   10005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:49:39.546206   10005 out.go:304] Setting ErrFile to fd 2...
	I0701 04:49:39.546208   10005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:49:39.546336   10005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	W0701 04:49:39.546437   10005 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19166-9507/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19166-9507/.minikube/config/config.json: no such file or directory
	I0701 04:49:39.547714   10005 out.go:298] Setting JSON to true
	I0701 04:49:39.565302   10005 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6548,"bootTime":1719828031,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:49:39.565391   10005 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:49:39.571537   10005 out.go:97] [download-only-666000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:49:39.571702   10005 notify.go:220] Checking for updates...
	W0701 04:49:39.571763   10005 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball: no such file or directory
	I0701 04:49:39.575441   10005 out.go:169] MINIKUBE_LOCATION=19166
	I0701 04:49:39.581527   10005 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:49:39.585403   10005 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:49:39.588504   10005 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:49:39.591494   10005 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	W0701 04:49:39.596442   10005 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 04:49:39.596641   10005 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:49:39.599485   10005 out.go:97] Using the qemu2 driver based on user configuration
	I0701 04:49:39.599505   10005 start.go:297] selected driver: qemu2
	I0701 04:49:39.599509   10005 start.go:901] validating driver "qemu2" against <nil>
	I0701 04:49:39.599603   10005 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 04:49:39.602493   10005 out.go:169] Automatically selected the socket_vmnet network
	I0701 04:49:39.608015   10005 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0701 04:49:39.608139   10005 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 04:49:39.608194   10005 cni.go:84] Creating CNI manager for ""
	I0701 04:49:39.608211   10005 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0701 04:49:39.608281   10005 start.go:340] cluster config:
	{Name:download-only-666000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:49:39.612385   10005 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 04:49:39.616496   10005 out.go:97] Downloading VM boot image ...
	I0701 04:49:39.616513   10005 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/iso/arm64/minikube-v1.33.1-1719412936-19142-arm64.iso
	I0701 04:49:44.065320   10005 out.go:97] Starting "download-only-666000" primary control-plane node in "download-only-666000" cluster
	I0701 04:49:44.065359   10005 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 04:49:44.117179   10005 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0701 04:49:44.117201   10005 cache.go:56] Caching tarball of preloaded images
	I0701 04:49:44.117364   10005 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 04:49:44.123897   10005 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0701 04:49:44.123904   10005 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 04:49:44.197634   10005 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0701 04:49:49.206174   10005 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 04:49:49.206325   10005 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 04:49:49.902069   10005 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0701 04:49:49.902290   10005 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/download-only-666000/config.json ...
	I0701 04:49:49.902308   10005 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/download-only-666000/config.json: {Name:mkca6ff7504630bcae3120017be8656fc2eb8640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 04:49:49.902576   10005 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 04:49:49.903018   10005 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0701 04:49:50.243094   10005 out.go:169] 
	W0701 04:49:50.248075   10005 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19166-9507/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60 0x108c7da60] Decompressors:map[bz2:0x14000887aa0 gz:0x14000887aa8 tar:0x14000887a50 tar.bz2:0x14000887a60 tar.gz:0x14000887a70 tar.xz:0x14000887a80 tar.zst:0x14000887a90 tbz2:0x14000887a60 tgz:0x14000887a70 txz:0x14000887a80 tzst:0x14000887a90 xz:0x14000887ab0 zip:0x14000887ac0 zst:0x14000887ab8] Getters:map[file:0x140004a6950 http:0x140008b41e0 https:0x140008b4230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0701 04:49:50.248099   10005 out_reason.go:110] 
	W0701 04:49:50.256005   10005 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 04:49:50.259970   10005 out.go:169] 
	
	
	* The control-plane node download-only-666000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-666000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
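
The Last Start log above shows why the v1.20.0 download-only jobs fail before any VM is involved: dl.k8s.io answers 404 for the darwin/arm64 kubectl checksum at v1.20.0, which suggests upstream never published Apple-silicon binaries for that release (the v1.30.2 downloads in this same run succeed). A quick check from any shell, curl assumed:

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # 404 expected
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.30.2/bin/darwin/arm64/kubectl.sha256   # 200 expected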

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-666000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.2/json-events (8.28s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-897000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-897000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 : (8.274738584s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (8.28s)

TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnly/v1.30.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-897000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-897000: exit status 85 (80.792334ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
	|         | -p download-only-666000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
	| delete  | -p download-only-666000        | download-only-666000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT | 01 Jul 24 04:49 PDT |
	| start   | -o=json --download-only        | download-only-897000 | jenkins | v1.33.1 | 01 Jul 24 04:49 PDT |                     |
	|         | -p download-only-897000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 04:49:50
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 04:49:50.679614   10032 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:49:50.679751   10032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:49:50.679758   10032 out.go:304] Setting ErrFile to fd 2...
	I0701 04:49:50.679760   10032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:49:50.679892   10032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:49:50.680922   10032 out.go:298] Setting JSON to true
	I0701 04:49:50.696930   10032 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6559,"bootTime":1719828031,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:49:50.696997   10032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:49:50.701957   10032 out.go:97] [download-only-897000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:49:50.702056   10032 notify.go:220] Checking for updates...
	I0701 04:49:50.706091   10032 out.go:169] MINIKUBE_LOCATION=19166
	I0701 04:49:50.708962   10032 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:49:50.712979   10032 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:49:50.716026   10032 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:49:50.718995   10032 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	W0701 04:49:50.725011   10032 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 04:49:50.725168   10032 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:49:50.727981   10032 out.go:97] Using the qemu2 driver based on user configuration
	I0701 04:49:50.727991   10032 start.go:297] selected driver: qemu2
	I0701 04:49:50.727993   10032 start.go:901] validating driver "qemu2" against <nil>
	I0701 04:49:50.728055   10032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 04:49:50.730968   10032 out.go:169] Automatically selected the socket_vmnet network
	I0701 04:49:50.736120   10032 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0701 04:49:50.736217   10032 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 04:49:50.736254   10032 cni.go:84] Creating CNI manager for ""
	I0701 04:49:50.736273   10032 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0701 04:49:50.736282   10032 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 04:49:50.736321   10032 start.go:340] cluster config:
	{Name:download-only-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:49:50.739939   10032 iso.go:125] acquiring lock: {Name:mke8e030ee585cc9977e1c7054c733d53bd0e241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 04:49:50.742898   10032 out.go:97] Starting "download-only-897000" primary control-plane node in "download-only-897000" cluster
	I0701 04:49:50.742914   10032 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 04:49:50.796533   10032 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 04:49:50.796553   10032 cache.go:56] Caching tarball of preloaded images
	I0701 04:49:50.796715   10032 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 04:49:50.801907   10032 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0701 04:49:50.801914   10032 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0701 04:49:50.877834   10032 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4?checksum=md5:3bd37d965c85173ac77cdcc664938efd -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0701 04:49:55.071001   10032 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0701 04:49:55.071165   10032 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0701 04:49:55.616450   10032 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 04:49:55.616646   10032 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/download-only-897000/config.json ...
	I0701 04:49:55.616662   10032 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19166-9507/.minikube/profiles/download-only-897000/config.json: {Name:mk5f3f7b199c34183fdd935ac6b39a7d48994e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 04:49:55.617009   10032 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 04:49:55.617128   10032 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19166-9507/.minikube/cache/darwin/arm64/v1.30.2/kubectl
	
	
	* The control-plane node download-only-897000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-897000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.08s)
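
In contrast to the v1.20.0 run, the v1.30.2 preload above downloads cleanly because its URL embeds an md5 digest (?checksum=md5:3bd37d965c85173ac77cdcc664938efd) that is verified before the cache entry is kept. A minimal Go sketch of that kind of verification, reusing the file name and digest from the log (illustrative, not minikube's implementation):

    // verify.go — recompute the md5 of the preload tarball and compare it to
    // the digest carried in the download URL.
    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    func main() {
    	const expected = "3bd37d965c85173ac77cdcc664938efd"
    	f, err := os.Open("preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4")
    	if err != nil {
    		fmt.Println("open:", err)
    		return
    	}
    	defer f.Close()
    	h := md5.New()
    	if _, err := io.Copy(h, f); err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	// the cached tarball is only reused when the digests match
    	fmt.Println("match:", hex.EncodeToString(h.Sum(nil)) == expected)
    }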

TestDownloadOnly/v1.30.2/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.11s)

TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-897000
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.27s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-076000 --alsologtostderr --binary-mirror http://127.0.0.1:51936 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-076000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-076000
--- PASS: TestBinaryMirror (0.27s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-711000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-711000: exit status 85 (55.161417ms)

-- stdout --
	* Profile "addons-711000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-711000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-711000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-711000: exit status 85 (57.928209ms)

-- stdout --
	* Profile "addons-711000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-711000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (12.67s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (12.67s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 status: exit status 7 (31.114541ms)

-- stdout --
	nospam-145000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 status: exit status 7 (30.113417ms)

-- stdout --
	nospam-145000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 status: exit status 7 (29.676084ms)

-- stdout --
	nospam-145000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
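
The repeated exit status 7 above is the expected "everything stopped" result: `minikube status` encodes component health in the exit code's low bits (per `minikube status --help`), so 7 = 1 + 2 + 4 with all three components down, matching the Stopped fields in the output. A small decoding sketch; the bit names here are an assumption based on that help text:

    // statusbits.go — decode a `minikube status` exit code as a bitmask.
    package main

    import "fmt"

    func main() {
    	exitStatus := 7 // from the Non-zero exit lines above
    	bits := []string{"minikube host", "cluster", "kubernetes"}
    	for i, name := range bits {
    		if exitStatus&(1<<i) != 0 {
    			fmt.Println(name, "not running")
    		}
    	}
    }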

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 pause: exit status 83 (38.693167ms)

-- stdout --
	* The control-plane node nospam-145000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-145000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 pause: exit status 83 (39.373875ms)

-- stdout --
	* The control-plane node nospam-145000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-145000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 pause: exit status 83 (38.768291ms)

-- stdout --
	* The control-plane node nospam-145000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-145000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 unpause: exit status 83 (39.765458ms)

-- stdout --
	* The control-plane node nospam-145000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-145000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 unpause: exit status 83 (38.926459ms)

-- stdout --
	* The control-plane node nospam-145000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-145000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 unpause: exit status 83 (38.732208ms)

-- stdout --
	* The control-plane node nospam-145000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-145000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (9.74s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 stop: (3.426582208s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 stop: (3.226691417s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-145000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-145000 stop: (3.08427725s)
--- PASS: TestErrorSpam/stop (9.74s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19166-9507/.minikube/files/etc/test/nested/copy/10003/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.68s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2210124486/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 cache add minikube-local-cache-test:functional-750000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 cache delete minikube-local-cache-test:functional-750000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-750000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 config get cpus: exit status 14 (30.523916ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 config get cpus: exit status 14 (37.214292ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-750000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-750000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (160.997417ms)

-- stdout --
	* [functional-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0701 04:51:35.334547   10610 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:51:35.334765   10610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:51:35.334770   10610 out.go:304] Setting ErrFile to fd 2...
	I0701 04:51:35.334774   10610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:51:35.334971   10610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:51:35.336356   10610 out.go:298] Setting JSON to false
	I0701 04:51:35.356566   10610 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6664,"bootTime":1719828031,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:51:35.356636   10610 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:51:35.361041   10610 out.go:177] * [functional-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0701 04:51:35.368018   10610 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 04:51:35.368068   10610 notify.go:220] Checking for updates...
	I0701 04:51:35.375051   10610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:51:35.377985   10610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:51:35.381016   10610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:51:35.384007   10610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 04:51:35.386941   10610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 04:51:35.390299   10610 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:51:35.390597   10610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:51:35.394974   10610 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 04:51:35.402026   10610 start.go:297] selected driver: qemu2
	I0701 04:51:35.402032   10610 start.go:901] validating driver "qemu2" against &{Name:functional-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:51:35.402080   10610 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 04:51:35.407772   10610 out.go:177] 
	W0701 04:51:35.411943   10610 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0701 04:51:35.415970   10610 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-750000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
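
The dry-run exits with status 23 because the requested allocation fails minikube's memory validation: 250MB is below the 1800MB usable minimum reported in the log (RSRC_INSUFFICIENT_REQ_MEMORY). An illustrative sketch of such a guard, not minikube's actual implementation:

    // memcheck.go — reject a memory request below the usable minimum.
    package main

    import "fmt"

    func main() {
    	const usableMinMB = 1800 // minimum reported by minikube in the log
    	requestedMB := 250       // from the --memory 250MB flag under test
    	if requestedMB < usableMinMB {
    		fmt.Printf("requested %dMB < usable minimum %dMB\n", requestedMB, usableMinMB)
    	}
    }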

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-750000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-750000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.023417ms)

-- stdout --
	* [functional-750000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0701 04:51:35.561503   10621 out.go:291] Setting OutFile to fd 1 ...
	I0701 04:51:35.561771   10621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:51:35.561776   10621 out.go:304] Setting ErrFile to fd 2...
	I0701 04:51:35.561778   10621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 04:51:35.561973   10621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19166-9507/.minikube/bin
	I0701 04:51:35.563416   10621 out.go:298] Setting JSON to false
	I0701 04:51:35.580030   10621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6664,"bootTime":1719828031,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0701 04:51:35.580105   10621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0701 04:51:35.585024   10621 out.go:177] * [functional-750000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0701 04:51:35.590014   10621 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 04:51:35.590072   10621 notify.go:220] Checking for updates...
	I0701 04:51:35.602861   10621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	I0701 04:51:35.606091   10621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 04:51:35.609027   10621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 04:51:35.612032   10621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	I0701 04:51:35.615044   10621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 04:51:35.618337   10621 config.go:182] Loaded profile config "functional-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 04:51:35.618597   10621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 04:51:35.622981   10621 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0701 04:51:35.629974   10621 start.go:297] selected driver: qemu2
	I0701 04:51:35.629979   10621 start.go:901] validating driver "qemu2" against &{Name:functional-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 04:51:35.630037   10621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 04:51:35.635109   10621 out.go:177] 
	W0701 04:51:35.639033   10621 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0701 04:51:35.643039   10621 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.226088041s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-750000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.24s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image rm gcr.io/google-containers/addon-resizer:functional-750000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-750000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 image save --daemon gcr.io/google-containers/addon-resizer:functional-750000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-750000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "47.624083ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "31.984458ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "46.742833ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "34.90675ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013530583s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
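
The check above goes through the macOS resolver cache via dscacheutil. An equivalent programmatic probe could use Go's resolver against the same tunnel-published name (illustrative only; Go may bypass the system cache that dscacheutil exercises, so results can differ):

    // dnscheck.go — resolve the service name the tunnel publishes.
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	addrs, err := net.LookupHost("nginx-svc.default.svc.cluster.local.")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	fmt.Println("resolved:", addrs)
    }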
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-750000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.1s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-750000
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-750000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-750000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.85s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-571000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-571000 --output=json --user=testUser: (3.853492125s)
--- PASS: TestJSONOutput/stop/Command (3.85s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-632000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-632000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (103.828542ms)
-- stdout --
	{"specversion":"1.0","id":"be3cfffa-7f0e-4d35-9316-deb9dc048d7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-632000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ccc70beb-602a-4273-9a0b-fad58913162b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19166"}}
	{"specversion":"1.0","id":"87c1f260-abf3-4709-bfeb-e0777aaab256","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig"}}
	{"specversion":"1.0","id":"9acff947-9782-4672-b873-b06e758e21e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"723086a9-e51a-41f9-9ebf-645fcadd57c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"78596648-d4f6-43cf-9e7c-32cc54785453","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube"}}
	{"specversion":"1.0","id":"d3b6a521-dea6-44ef-8b65-086ac4ea52d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5647e532-de71-4ad3-9f2e-1d6abaaa58f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-632000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-632000
--- PASS: TestErrorJSONOutput (0.21s)
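Each stdout line above is a self-contained CloudEvents-style JSON object, and the test passes because the final event carries the expected DRV_UNSUPPORTED_OS error with exit code 56. A minimal consumer sketch, illustrative rather than minikube's own tooling, using only field names visible in the log (specversion, type, and the data map with name, exitcode, message):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// Field names taken from the events printed above; data values are strings there.
	type event struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// e.g. pipe `minikube start --output=json ...` into this program
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Fed the stdout shown above, it would print: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on darwin/arm64.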
TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.45s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.45s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-730000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-730000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.913833ms)
-- stdout --
	* [NoKubernetes-730000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19166-9507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19166-9507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
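The subtest passes because minikube rejects the flag combination up front: --kubernetes-version together with --no-kubernetes is a usage error (MK_USAGE), reported as exit status 14 before any VM work starts. A rough standalone sketch of the same assertion via os/exec:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-730000",
			"--no-kubernetes", "--kubernetes-version=1.20", "--driver=qemu2")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 14 {
			fmt.Println("got expected MK_USAGE exit status 14")
			return
		}
		log.Fatalf("expected exit status 14, got: %v", err)
	}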
TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-730000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-730000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.564916ms)
-- stdout --
	* The control-plane node NoKubernetes-730000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-730000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (31.35s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.691658291s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.659509958s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.35s)

TestNoKubernetes/serial/Stop (3.21s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-730000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-730000: (3.205444416s)
--- PASS: TestNoKubernetes/serial/Stop (3.21s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-730000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-730000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.515375ms)
-- stdout --
	* The control-plane node NoKubernetes-730000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-730000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-841000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

TestStartStop/group/old-k8s-version/serial/Stop (3.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-821000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-821000 --alsologtostderr -v=3: (3.094016292s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-821000 -n old-k8s-version-821000: exit status 7 (49.614666ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-821000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
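The --format={{.Host}} argument to `minikube status` is a Go text/template rendered against the status object, which is why stdout is the bare word Stopped while the exit status (7 here) separately encodes the host state. A tiny illustration of the template mechanism with a stand-in struct (the real minikube status type is not shown in this log):

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for minikube's status object; only the Host field,
	// referenced by the --format template above, is modeled here.
	type status struct {
		Host string
	}

	func main() {
		t := template.Must(template.New("status").Parse("{{.Host}}"))
		_ = t.Execute(os.Stdout, status{Host: "Stopped"}) // prints: Stopped
	}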
TestStartStop/group/no-preload/serial/Stop (3.82s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-340000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-340000 --alsologtostderr -v=3: (3.823954542s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.82s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-340000 -n no-preload-340000: exit status 7 (56.976709ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-340000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.37s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-416000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-416000 --alsologtostderr -v=3: (3.368299584s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.37s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-416000 -n embed-certs-416000: exit status 7 (58.456291ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-416000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (4.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-318000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-318000 --alsologtostderr -v=3: (4.033361041s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (4.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-318000 -n default-k8s-diff-port-318000: exit status 7 (62.337ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-318000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-050000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.5s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-050000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-050000 --alsologtostderr -v=3: (3.504479334s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.50s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-050000 -n newest-cni-050000: exit status 7 (63.004166ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-050000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10.36s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3389535658/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1719834663707515000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3389535658/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1719834663707515000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3389535658/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1719834663707515000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3389535658/001/test-1719834663707515000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (55.860958ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.281125ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.119667ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (81.431375ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.345292ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.534792ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.957833ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo umount -f /mount-9p": exit status 83 (52.816083ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-750000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3389535658/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.36s)

TestFunctional/parallel/MountCmd/specific-port (10.34s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2053327413/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.661ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.249375ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.889333ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.285334ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (81.548709ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.833625ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.195584ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "sudo umount -f /mount-9p": exit status 83 (46.497416ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-750000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2053327413/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.34s)

TestFunctional/parallel/MountCmd/VerifyCleanup (10.86s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup757592236/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup757592236/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup757592236/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1: exit status 83 (80.634959ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1: exit status 83 (83.282292ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1: exit status 83 (85.090458ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1: exit status 83 (86.575834ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1: exit status 83 (83.071083ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1: exit status 83 (87.584083ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-750000 ssh "findmnt -T" /mount1: exit status 83 (83.714833ms)
-- stdout --
	* The control-plane node functional-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-750000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup757592236/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup757592236/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-750000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup757592236/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (10.86s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.31s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-731000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-731000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-731000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-731000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-731000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-731000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-731000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-731000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-731000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-731000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-731000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: /etc/hosts:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: /etc/resolv.conf:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-731000

>>> host: crictl pods:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: crictl containers:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> k8s: describe netcat deployment:
error: context "cilium-731000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-731000" does not exist

>>> k8s: netcat logs:
error: context "cilium-731000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-731000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-731000" does not exist

>>> k8s: coredns logs:
error: context "cilium-731000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-731000" does not exist

>>> k8s: api server logs:
error: context "cilium-731000" does not exist

>>> host: /etc/cni:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: ip a s:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: ip r s:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: iptables-save:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: iptables table nat:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-731000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-731000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-731000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-731000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-731000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-731000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-731000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-731000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-731000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-731000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-731000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: kubelet daemon config:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> k8s: kubelet logs:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-731000

>>> host: docker daemon status:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: docker daemon config:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: docker system info:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: cri-docker daemon status:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: cri-docker daemon config:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: cri-dockerd version:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: containerd daemon status:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: containerd daemon config:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: containerd config dump:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: crio daemon status:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: crio daemon config:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: /etc/crio:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

>>> host: crio config:
* Profile "cilium-731000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731000"

----------------------- debugLogs end: cilium-731000 [took: 2.205808708s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-731000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-731000
--- SKIP: TestNetworkPlugins/group/cilium (2.31s)
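Note: every probe in the debug dump above fails the same way because the kubeconfig it prints under "k8s: kubectl config" has clusters: null and contexts: null; the cilium-731000 profile was never created, since the test skipped before starting a cluster. A minimal client-go sketch of that same lookup (an illustration, not part of the suite):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the way kubectl does
	// (~/.kube/config unless KUBECONFIG overrides it).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// With clusters: null and contexts: null, this lookup fails, which is
	// exactly why each probe reported `context "cilium-731000" does not exist`.
	if _, ok := cfg.Contexts["cilium-731000"]; !ok {
		fmt.Println(`context "cilium-731000" does not exist`)
	}
}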

TestStartStop/group/disable-driver-mounts (0.1s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-373000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-373000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)